I want to evaluate inference performance with AMX-FP16 on a new CPU. The default Docker images and AI tool packages do not support AMX-FP16 because they ship an old version of oneDNN (3.2.0), so I built oneDNN 3.3.0 and Intel Extension for TensorFlow 2.15.0 from source and used the following model:
FP32, FP16, and BFloat16 pretrained model:
wget https://zenodo.org/record/2535873/files/resnet50_v1.pb
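Note that this pretrained .pb stores FP32 weights, so FP16 execution has to be requested explicitly, e.g. through ITEX auto mixed precision. Below is a minimal sketch of how I understand that is done; the `ITEX_AUTO_MIXED_PRECISION*` environment variables come from the ITEX auto-mixed-precision guide, and the input/output tensor names are assumptions for this MLPerf ResNet50 v1 graph, so both should be verified against your setup.

```python
import os

# Assumption: these env vars enable ITEX auto mixed precision with FP16
# (per the ITEX AMP guide); they must be set before TensorFlow loads.
os.environ["ITEX_AUTO_MIXED_PRECISION"] = "1"
os.environ["ITEX_AUTO_MIXED_PRECISION_DATA_TYPE"] = "FLOAT16"

import numpy as np
import tensorflow as tf

# Load the frozen FP32 graph; AMP is expected to rewrite eligible ops
# to FP16 when the graph is executed.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("resnet50_v1.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.compat.v1.Session(graph=graph) as sess:
    # Tensor names are assumptions for this particular .pb.
    inp = graph.get_tensor_by_name("input_tensor:0")
    out = graph.get_tensor_by_name("softmax_tensor:0")
    dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)
    print(sess.run(out, feed_dict={inp: dummy}).shape)
```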
However, the oneDNN verbose log showed that this model ran in FP32, not FP16.
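For reference, this is how I check which precision the primitives actually execute in. `ONEDNN_VERBOSE` is oneDNN's documented logging switch; the `f32`/`f16` tags mentioned in the comments describe the general shape of its output, not an exact log from my run.

```python
import os

# ONEDNN_VERBOSE=1 makes oneDNN print one line per executed primitive;
# it must be set before the library initializes (i.e. before importing TF).
os.environ["ONEDNN_VERBOSE"] = "1"

import tensorflow as tf

# ... run the same inference as in the sketch above, then inspect stdout.
# Each primitive line carries data-type tags such as src_f32 / src_f16;
# if every convolution still reports f32, the AMP rewrite did not apply
# and the graph is executing in FP32.
```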