Conversation

Contributor

@RahulC7 RahulC7 commented Dec 5, 2025

Summary:
We first create a list of quantizers that are currently not tested (we'll gradually reduce this list to zero), and then we create a test to ensure that all future quantizers get tested using this framework.

In order to do this, we needed to refactor how the current test is set up, specifically the parameterization.

Reviewed By: hsharma35

Differential Revision: D88055443

Copilot AI review requested due to automatic review settings December 5, 2025 16:19
@pytorch-bot

pytorch-bot bot commented Dec 5, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16099

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 34d9597 with merge base 56e131b:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 5, 2025

meta-codesync bot commented Dec 5, 2025

@RahulC7 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D88055443.


github-actions bot commented Dec 5, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Pull request overview

This PR establishes a comprehensive testing framework for Cadence quantizers to ensure all quantizers are tested. It introduces a parameterized test structure that verifies quantizer annotation behavior and includes a meta-test (test_all_quantizers_have_annotation_tests) that automatically discovers all CadenceQuantizer subclasses and fails if any are neither tested nor explicitly excluded.

Key changes:

  • Refactored test structure to use parameterized testing with QUANTIZER_ANNOTATION_TEST_CASES
  • Added helper methods to build test graphs for matmul and linear operations
  • Implemented an automated discovery test to enforce coverage of all future quantizers (see the sketch below)
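
For readers new to the pattern, here is a minimal sketch of what that discovery test can look like. The import path, the UNTESTED_QUANTIZERS allowlist, and the placeholder case list are assumptions for illustration (pieced together from the review excerpts below), not verbatim code from the PR:

import inspect
import unittest

# Path assumed from the review excerpts; the PR imports the real module.
import executorch.backends.cadence.aot.quantizer.quantizer as quantizer_module
from executorch.backends.cadence.aot.quantizer.quantizer import CadenceQuantizer

# Placeholder for the PR's parameterized case list; each entry is
# (name, graph_builder_fn, quantizer_instance, target_op,
#  expected_output_qspec, expected_input_qspecs).
QUANTIZER_ANNOTATION_TEST_CASES: list[tuple] = []

# Hypothetical allowlist of quantizers known to lack tests today; the PR's
# stated goal is to shrink this set to empty over time.
UNTESTED_QUANTIZERS: frozenset[str] = frozenset()


class TestQuantizerCoverage(unittest.TestCase):
    def test_all_quantizers_have_annotation_tests(self) -> None:
        # Discover every CadenceQuantizer subclass defined in the module.
        all_quantizers = {
            obj
            for _, obj in inspect.getmembers(quantizer_module, inspect.isclass)
            if issubclass(obj, CadenceQuantizer)
            and obj is not CadenceQuantizer
            and obj.__module__ == quantizer_module.__name__
        }
        # Index 2 of each case tuple is the quantizer instance.
        tested = {type(case[2]) for case in QUANTIZER_ANNOTATION_TEST_CASES}
        missing = {
            q.__name__ for q in all_quantizers if q not in tested
        } - UNTESTED_QUANTIZERS
        self.assertFalse(
            missing, f"Quantizers without annotation tests: {sorted(missing)}"
        )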

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.

File: backends/cadence/aot/tests/test_quantizer_ops.py
    Adds parameterized quantizer annotation tests, graph builder methods, and comprehensive coverage enforcement via test_all_quantizers_have_annotation_tests

File: backends/cadence/aot/TARGETS
    Adds new dependencies: parameterized, graph_builder, pass_base, and torchao


Comment on lines +193 to +200
if (
    issubclass(obj, CadenceQuantizer)
    and obj is not CadenceQuantizer
    and obj.__module__ == quantizer_module.__name__
):
    all_quantizers.add(obj)

Copilot AI Dec 5, 2025


The issubclass check on line 194 could raise a TypeError if obj is not a class. While this is unlikely given the inspect.isclass filter, it's safer to wrap this in a try-except or add an additional check. Consider:

for _, obj in inspect.getmembers(quantizer_module, inspect.isclass):
    try:
        if (
            issubclass(obj, CadenceQuantizer)
            and obj is not CadenceQuantizer
            and obj.__module__ == quantizer_module.__name__
        ):
            all_quantizers.add(obj)
    except TypeError:
        # Not a proper class, skip
        pass
Suggested change
Before:
    if (
        issubclass(obj, CadenceQuantizer)
        and obj is not CadenceQuantizer
        and obj.__module__ == quantizer_module.__name__
    ):
        all_quantizers.add(obj)
After:
    try:
        if (
            issubclass(obj, CadenceQuantizer)
            and obj is not CadenceQuantizer
            and obj.__module__ == quantizer_module.__name__
        ):
            all_quantizers.add(obj)
    except TypeError:
        # Not a proper class, skip
        pass

"""Unit tests for verifying quantizer annotations are correctly applied."""

def _build_matmul_graph(self) -> tuple[torch.fx.GraphModule, torch.fx.Node]:
"""Build a simple graph with a matmul operation."""

Copilot AI Dec 5, 2025


The docstring says "Build a simple graph with a matmul operation" but doesn't document the return values. Consider adding:

"""Build a simple graph with a matmul operation.
    
Returns:
    tuple: (GraphModule, matmul_node) where matmul_node is the target operation node.
"""
Suggested change
Before:
    """Build a simple graph with a matmul operation."""
After:
    """Build a simple graph with a matmul operation.

    Returns:
        tuple: (GraphModule, matmul_node) where matmul_node is the target operation node.
    """

        return gm, matmul_nodes[0]

    def _build_linear_graph(self) -> tuple[torch.fx.GraphModule, torch.fx.Node]:
        """Build a simple graph with a linear operation (no bias)."""

Copilot AI Dec 5, 2025


Similar to the matmul graph builder, the docstring should document the return values:

"""Build a simple graph with a linear operation (no bias).
    
Returns:
    tuple: (GraphModule, linear_node) where linear_node is the target operation node.
"""
Suggested change
Before:
    """Build a simple graph with a linear operation (no bias)."""
After:
    """Build a simple graph with a linear operation (no bias).

    Returns:
        tuple: (GraphModule, linear_node) where linear_node is the target operation node.
    """

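As background for these builder helpers, here is a self-contained torch.fx sketch of a linear-graph builder that carries the docstring shape suggested above. The PR itself reportedly uses the Cadence GraphBuilder so that nodes carry the metadata quantizer.annotate needs; this plain-fx version is an illustrative stand-in, not the PR's code:

import torch
import torch.fx


class _LinearModule(torch.nn.Module):
    """Tiny module whose traced graph contains one bias-free linear op."""

    def __init__(self) -> None:
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 8))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.linear(x, self.weight)


def build_linear_graph() -> tuple[torch.fx.GraphModule, torch.fx.Node]:
    """Build a simple graph with a linear operation (no bias).

    Returns:
        tuple: (GraphModule, linear_node) where linear_node is the target
        operation node.
    """
    gm = torch.fx.symbolic_trace(_LinearModule())
    linear_nodes = [
        n for n in gm.graph.nodes if n.target is torch.nn.functional.linear
    ]
    assert len(linear_nodes) == 1, "expected exactly one linear node"
    return gm, linear_nodes[0]
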
        expected_output_qspec: QuantizationSpec,
        expected_input_qspecs: list[QuantizationSpec],
    ) -> None:
        """Parameterized test for quantizer annotations."""

Copilot AI Dec 5, 2025


[nitpick] The docstring is quite minimal. Consider adding more details about what the test verifies:

"""Parameterized test for quantizer annotations.
    
Verifies that:
1. The quantizer correctly annotates the target operation node
2. The output quantization spec matches expectations
3. All input quantization specs match expectations and are correctly mapped
"""
Suggested change
Before:
    """Parameterized test for quantizer annotations."""
After:
    """Parameterized test for quantizer annotations.

    Verifies that:
    1. The quantizer correctly annotates the target operation node.
    2. The output quantization spec matches expectations.
    3. All input quantization specs match expectations and are correctly mapped.
    """

Comment on lines 174 to 186
for i, (input_node, input_qspec) in enumerate(
    annotation.input_qspec_map.items()
):
    self.assertEqual(
        input_node,
        op_node.args[i],
        f"Input node mismatch at index {i}",
    )
    self.assertEqual(
        input_qspec,
        expected_input_qspecs[i],

Copilot AI Dec 5, 2025


The test assumes that the iteration order of annotation.input_qspec_map.items() matches the order of arguments in op_node.args. While dictionaries in Python 3.7+ maintain insertion order, this assumption could be fragile if the input_qspec_map is populated in a different order than the arguments. Consider using explicit indexing or documenting this assumption. For example:

for i, expected_qspec in enumerate(expected_input_qspecs):
    input_arg = op_node.args[i]
    self.assertIn(
        input_arg,
        annotation.input_qspec_map,
        f"Missing qspec for input at index {i}",
    )
    self.assertEqual(
        annotation.input_qspec_map[input_arg],
        expected_qspec,
        f"Input qspec mismatch at index {i}",
    )
Suggested change
Before:
    for i, (input_node, input_qspec) in enumerate(
        annotation.input_qspec_map.items()
    ):
        self.assertEqual(
            input_node,
            op_node.args[i],
            f"Input node mismatch at index {i}",
        )
        self.assertEqual(
            input_qspec,
            expected_input_qspecs[i],
After:
    for i, expected_qspec in enumerate(expected_input_qspecs):
        input_arg = op_node.args[i]
        self.assertIn(
            input_arg,
            annotation.input_qspec_map,
            f"Missing qspec for input at index {i}",
        )
        self.assertEqual(
            annotation.input_qspec_map[input_arg],
            expected_qspec,
quantizer.annotate(gm)

annotation: QuantizationAnnotation = op_node.meta[Q_ANNOTATION_KEY]
self.assertTrue(annotation._annotated)

Copilot AI Dec 5, 2025


Accessing the private attribute _annotated directly may be fragile. Consider checking if the annotation exists and has the expected properties instead, or verify if there's a public API to check annotation status.

Suggested change
Before:
    self.assertTrue(annotation._annotated)
After:
    self.assertTrue(getattr(annotation, "_annotated", False))

Comment on lines +64 to +76
# Test case definitions for quantizer annotation tests.
# Format: (name, graph_builder_fn, quantizer_instance, target_op, expected_output_qspec, expected_input_qspecs)
# Adding a new quantizer test only requires adding a tuple to this list.
QUANTIZER_ANNOTATION_TEST_CASES: list[
    tuple[
        str,
        GraphBuilderFn,
        CadenceQuantizer,
        OpOverload,
        QuantizationSpec,
        list[QuantizationSpec],
    ]
] = [

Copilot AI Dec 5, 2025


The comment packs the tuple format onto a single line, which is hard to scan. It would be clearer to state that each test case is a tuple and describe each of its six elements explicitly. Consider reformatting as:

# Test case definitions for quantizer annotation tests.
# Each test case is a tuple with the following elements:
# - name: Test case identifier
# - graph_builder_fn: Function to build the graph
# - quantizer_instance: The quantizer to test
# - target_op: The operation being quantized
# - expected_output_qspec: Expected output quantization spec
# - expected_input_qspecs: List of expected input quantization specs

RahulC7 added a commit to RahulC7/executorch that referenced this pull request Dec 8, 2025
Summary:

We first create a list of quantizers that are currently not tested (we'll gradually reduce this list to zero), and then we create a test to ensure that all future quantizers get tested using this framework.

In order to do this, we needed to refactor how the current test is set up, specifically the parameterization.

Reviewed By: mcremon-meta, zonglinpeng, hsharma35

Differential Revision: D88055443
RahulC7 added a commit to RahulC7/executorch that referenced this pull request Dec 8, 2025
RahulC7 added a commit to RahulC7/executorch that referenced this pull request Dec 8, 2025
Copilot AI review requested due to automatic review settings December 8, 2025 21:08
RahulC7 added a commit to RahulC7/executorch that referenced this pull request Dec 8, 2025
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.



name: str,
graph_builder_fn: GraphBuilderFn,
quantizer: CadenceQuantizer,
target: OpOverload,

Copilot AI Dec 8, 2025


The target parameter is defined in the test method signature but is never used in the test implementation (lines 161-186). If this parameter is intended for documentation or future use, consider either:

  1. Using it to validate that the op_node's target matches the expected target
  2. Removing it from the test case definition if it's not needed

Example validation could be: self.assertEqual(op_node.target, target, "Operation target mismatch")

RahulC7 added a commit to RahulC7/executorch that referenced this pull request Dec 8, 2025
Copilot AI review requested due to automatic review settings December 8, 2025 22:00
RahulC7 added a commit to RahulC7/executorch that referenced this pull request Dec 8, 2025
…6089)

Summary:

We test that the quantizer we added in D87996796 correctly annotates the graph.

We use the graph builder to build the graph with metadata (which quantizer.annotate needs to recognize the nodes), and we ensure that the quantization params are as expected.

Reviewed By: zonglinpeng, hsharma35

Differential Revision: D88053808
…6097)

Summary:

We test the CadenceWith16BitLinearActivationQuantizer. 

We use the graph builder to build the graph with metadata (which quantizer.annotate needs to recognize the nodes), and we ensure that the quantization params are as expected.

Reviewed By: zonglinpeng, hsharma35

Differential Revision: D88054651
…h#16098)

Summary:

We consolidate the two tests we created into a single testing function using parameterization. 

This will make testing future quantizers much easier and will greatly reduce code duplication.

Reviewed By: hsharma35, zonglinpeng

Differential Revision: D88054917
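
For illustration, a condensed sketch of what that consolidation can look like with the parameterized library. The case list is left empty here and the qspec assertions are elided; the names mirror the review excerpts, but the body is an assumption rather than the PR's exact code:

import unittest

from parameterized import parameterized

# Stand-in for the PR's case list, populated with tuples of
# (name, graph_builder_fn, quantizer, target,
#  expected_output_qspec, expected_input_qspecs).
QUANTIZER_ANNOTATION_TEST_CASES: list[tuple] = []


class TestQuantizerAnnotations(unittest.TestCase):
    @parameterized.expand(QUANTIZER_ANNOTATION_TEST_CASES)
    def test_quantizer_annotation(
        self,
        name,
        graph_builder_fn,
        quantizer,
        target,
        expected_output_qspec,
        expected_input_qspecs,
    ) -> None:
        gm, op_node = graph_builder_fn()
        # Fail fast if the builder produced an unexpected op; this also
        # exercises the otherwise-unused `target` argument flagged in review.
        self.assertEqual(op_node.target, target, "Operation target mismatch")
        quantizer.annotate(gm)
        # ...then assert that the output and input qspecs on the node's
        # annotation match expectations, as the PR's test body does.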
Summary:

We first create a list of quantizers that are currently not tested (we'll gradually reduce this list to zero), and then we create a test to ensure that all future quantizers get tested using this framework.

In order to do this, we needed to refactor how the current test is set up, specifically the parameterization.

Reviewed By: mcremon-meta, zonglinpeng, hsharma35

Differential Revision: D88055443
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.



    annotation.input_qspec_map.items()
):
    expected_arg = op_node.args[i]
    assert isinstance(expected_arg, torch.fx.Node)

Copilot AI Dec 8, 2025


Using a bare assert statement in a test is inconsistent with unittest conventions. Consider using self.assertIsInstance(expected_arg, torch.fx.Node) instead. This provides better error messages and is consistent with the rest of the test framework.

Suggested change
Before:
    assert isinstance(expected_arg, torch.fx.Node)
After:
    self.assertIsInstance(expected_arg, torch.fx.Node)

name: str,
graph_builder_fn: GraphBuilderFn,
quantizer: CadenceQuantizer,
target: OpOverload,

Copilot AI Dec 8, 2025


[nitpick] The target parameter is defined but not used in the test method body. Consider adding a verification that op_node.target == target to ensure the test is validating the expected operation type. This would make the test more robust by explicitly checking that the graph builder function created the expected operation.
