
Conversation

@Brijesh-Thakkar

Which issue does this PR close?

Closes #3016

Rationale for this change

to_json could emit invalid JSON when encountering special floating-point values
such as NaN, +Infinity, or -Infinity. These tokens are not part of the JSON
specification, so the output was incorrect or unparsable by standards-compliant parsers.
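
To illustrate the failure mode, a small sketch assuming the serde_json crate
(not necessarily what consumers of to_json use): a standards-compliant parser
rejects the bare NaN token, while the normalized form parses cleanly.

// Sketch: standard JSON parsers reject the NaN/Infinity tokens that
// to_json could previously emit. Assumes serde_json is available.
fn main() {
    let bad = r#"{"a":NaN}"#; // what to_json could produce before this fix
    // serde_json follows the JSON spec, so parsing fails:
    assert!(serde_json::from_str::<serde_json::Value>(bad).is_err());

    let good = r#"{"a":null}"#; // Spark-compatible output after this fix
    assert!(serde_json::from_str::<serde_json::Value>(good).is_ok());
}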

Apache Spark normalizes such values to null when converting to JSON. Since
DataFusion-Comet aims to be Spark-compatible, to_json needs to match this behavior.

What changes are included in this PR?

  • Normalize NaN, +Infinity, and -Infinity values to null during to_json
    conversion
  • Ensure to_json always produces valid JSON output
  • Add a regression test covering special floating-point values

How are these changes tested?

  • Added a unit test verifying to_json behavior for NaN, +Infinity, and
    -Infinity values
  • All existing tests in the native/spark-expr crate pass

Copilot AI review requested due to automatic review settings December 31, 2025 20:45

Copilot AI left a comment


Pull request overview

This PR fixes invalid JSON generation when to_json encounters special floating-point values (NaN, Infinity, -Infinity). These values are now normalized to null for Spark compatibility, ensuring valid JSON output is always produced.

Key Changes:

  • Introduced normalize_special_floats function to convert special float string representations to null (a sketch follows this list)
  • Modified array_to_json_string to apply normalization after casting non-struct arrays to strings
  • Added comprehensive test coverage for NaN and Infinity handling
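
A minimal sketch of what this normalization pass could look like over the
already-cast string array, assuming arrow-rs builders; the PR's actual
implementation may differ in its details.

use arrow::array::{Array, ArrayRef, StringArray, StringBuilder};
use std::sync::Arc;

// Sketch: rewrite the string forms of special floats to null so the
// JSON that embeds them stays valid. Not necessarily the PR's exact code.
fn normalize_special_floats(arr: &StringArray) -> ArrayRef {
    let mut builder = StringBuilder::with_capacity(arr.len(), arr.len() * 8);
    for i in 0..arr.len() {
        if arr.is_null(i) {
            builder.append_null();
        } else {
            match arr.value(i) {
                // Spark emits null for these values in JSON output
                "Infinity" | "-Infinity" | "NaN" => builder.append_null(),
                other => builder.append_value(other),
            }
        }
    }
    Arc::new(builder.finish())
}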


    builder.append_null();
} else {
    match arr.value(i) {
        "Infinity" | "-Infinity" | "NaN" => builder.append_null(),
Member

I haven't reviewed this in detail yet, but it seems odd to handle these values after they have already been converted to strings. Could the check not happen when converting float to string instead?

Author

I agree that handling this earlier would be preferable in general. In this case, to_json delegates primitive type handling to spark_cast, and the goal here was to avoid changing spark_cast behavior globally since it is used by other expressions where preserving "NaN" / "Infinity" string output may be expected.

Normalizing the values at the to_json layer keeps the change scoped specifically to JSON semantics while still aligning the output with Spark’s behavior.

That said, I’m happy to move the check earlier or adjust the approach if you think handling this during float-to-string conversion would be more appropriate for Comet.
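
For illustration, a hedged sketch of the alternative the reviewer raises: a
hypothetical check at the float-to-string layer, not actual spark_cast code.

// Hypothetical sketch of the reviewer's alternative: decide during
// float-to-string conversion, returning None where JSON needs a null.
fn float_to_json_text(v: f64) -> Option<String> {
    if v.is_nan() || v.is_infinite() {
        None // caller appends a JSON null instead of a string
    } else {
        Some(v.to_string())
    }
}

The trade-off discussed above still applies: placing such a check inside
spark_cast would change behavior for every expression that casts floats to strings.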



Development

Successfully merging this pull request may close these issues.

Bug: to_json does not support +Infinity, -Infinity for numeric types
