@xdustinface (Collaborator)
Merges v0.41-dev into master for the v0.41.0 release.

QuantumExplorer and others added 30 commits October 21, 2025 01:52
* feat: use feature for console ui in dash spv

* fmt

* fix
* refactor: split big files

* small fixes
Add network-level penalization for peers that relay invalid ChainLocks.
When an invalid ChainLock is detected, the peer that sent it receives:
- Reputation score penalty (INVALID_CHAINLOCK misbehavior score)
- 10-minute temporary ban

Changes:
- Add penalize_last_message_peer() methods to NetworkManager trait
- Implement temporary_ban_peer() in PeerReputationManager
- Update chainlock.rs to penalize peers on validation errors
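The flow above could be sketched roughly like this. All names, the score value, and the ban bookkeeping are assumptions for illustration, not the crate's actual API:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::time::{Duration, Instant};

const INVALID_CHAINLOCK_SCORE: u32 = 20; // assumed value, not from the source
const TEMP_BAN: Duration = Duration::from_secs(600); // 10-minute temporary ban

#[derive(Default)]
struct PeerReputationManager {
    scores: HashMap<SocketAddr, u32>,
    banned_until: HashMap<SocketAddr, Instant>,
}

impl PeerReputationManager {
    /// Bump the peer's misbehavior score and apply the temporary ban.
    fn penalize_invalid_chainlock(&mut self, peer: SocketAddr) {
        *self.scores.entry(peer).or_insert(0) += INVALID_CHAINLOCK_SCORE;
        self.temporary_ban_peer(peer, TEMP_BAN);
    }

    fn temporary_ban_peer(&mut self, peer: SocketAddr, dur: Duration) {
        self.banned_until.insert(peer, Instant::now() + dur);
    }

    fn is_banned(&self, peer: &SocketAddr) -> bool {
        self.banned_until
            .get(peer)
            .is_some_and(|t| *t > Instant::now())
    }
}
```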

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
The `is_multiple_of` method is not available here (it was only recently stabilized in Rust's standard library). Use the modulo operator (%) instead to check divisibility by 1000.
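A minimal sketch of the replacement check (the helper name is illustrative):

```rust
/// Log progress every 1000 blocks; `%` expresses the divisibility
/// check without relying on `is_multiple_of`.
fn should_log_progress(height: u32) -> bool {
    height % 1000 == 0
}
```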

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Refactor arithmetic to avoid potential underflow when filter_hashes
is empty. Use saturating_sub for the offset calculation before
applying it to stop_height.
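A sketch of the pattern described above (function and parameter names are assumed for illustration):

```rust
/// With an empty `filter_hashes`, `len - 1` would underflow in debug builds;
/// `saturating_sub` clamps the offset at zero before it touches stop_height.
fn start_height(stop_height: u32, filter_hashes_len: u32) -> u32 {
    let offset = filter_hashes_len.saturating_sub(1);
    stop_height.saturating_sub(offset)
}
```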

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…ation

Implements comprehensive InstantLock validation with cryptographic verification
and peer reputation penalization for invalid messages.

InstantLock Validation:
- Add full BLS signature verification using MasternodeListEngine
- Implement DIP 24 cyclehash-based quorum selection for signature validation
- Add structural validation: null checks for txid, signature, and inputs
- Split validation into validate_structure() and full validate() methods
- Require masternode engine for signature verification (security critical)

Peer Reputation System:
- Add INVALID_INSTANTLOCK misbehavior score (35 points)
- Penalize peers relaying invalid InstantLocks with 10-minute temporary ban
- Add convenience method penalize_last_message_peer_invalid_instantlock()
- Integrate peer penalization into InstantLock processing flow

Quorum Manager:
- Implement BLS signature verification using blsful library
- Verify signatures against quorum public keys with proper error handling
- Add detailed logging for signature verification success/failure

Event System:
- Add SpvEvent::InstantLockReceived event for validated InstantLocks
- Emit events after successful validation in chainlock handler
- Update FFI client to handle InstantLock events (with TODO for callbacks)

Storage Cleanup:
- Remove store_instant_lock() and load_instant_lock() methods
- InstantLocks are ephemeral and validated on receipt, not persisted
- Simplify storage trait by removing unused InstantLock persistence

Error Handling:
- Add InvalidSignature error variant for BLS verification failures
- Improve error messages with context (quorum type, height, reason)

Testing:
- Add comprehensive unit tests for null checks and validation paths
- Remove outdated instantsend_integration_test.rs (WalletManager API changed)
- Tests verify structural validation and request ID computation

Security:
Never accept InstantLocks from network without full BLS signature
verification. This implementation ensures cryptographic validation using
the proper quorum selected via DIP 24 cyclehash mechanism.
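The cheap structural half of the split validation could look roughly like this. The type shapes below are hypothetical stand-ins, not the real dash-spv types, and the full BLS verification step is omitted:

```rust
// Hypothetical shapes for illustration only.
struct OutPoint {
    txid: [u8; 32],
    vout: u32,
}

struct InstantLock {
    inputs: Vec<OutPoint>,
    txid: [u8; 32],
    signature: [u8; 96], // BLS signature bytes
}

impl InstantLock {
    /// Cheap structural checks run before the expensive BLS verification:
    /// reject all-zero txid, all-zero signature, and empty input lists.
    fn validate_structure(&self) -> Result<(), &'static str> {
        if self.txid == [0u8; 32] {
            return Err("null txid");
        }
        if self.signature == [0u8; 96] {
            return Err("null signature");
        }
        if self.inputs.is_empty() {
            return Err("no inputs");
        }
        Ok(())
    }
}
```

Full `validate()` would additionally resolve the quorum via the DIP 24 cyclehash mechanism and verify the BLS signature against the quorum public key.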

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…non-overlapping suffix

Compute headers from CFHeaders message and verify continuity at expected overlap boundary; store only the non-overlapping tail. Remove invalid variable usage.
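The boundary check and suffix extraction reduce to something like the sketch below, using a stand-in header type instead of real filter headers:

```rust
/// Verify that the first `overlap` derived headers match the tail of what we
/// already store, then return only the non-overlapping suffix for storage.
/// `u64` stands in for the real filter-header hash type.
fn non_overlapping_suffix(
    local_tip: &[u64], // headers already stored
    derived: &[u64],   // headers derived from the CFHeaders message
    overlap: usize,    // expected overlap with our tip
) -> Option<Vec<u64>> {
    if derived.len() < overlap || local_tip.len() < overlap {
        return None;
    }
    // Continuity check at the expected overlap boundary.
    if derived[..overlap] != local_tip[local_tip.len() - overlap..] {
        return None; // peer disagrees with our chain
    }
    Some(derived[overlap..].to_vec())
}
```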
Remove the Selective mempool strategy which was not fully implemented and
relied on wallet integration that doesn't exist yet. The Selective strategy
only tracked transactions that the user recently sent, making it ineffective
for monitoring mempool transactions and InstantLocks.

Changes:
- Remove MempoolStrategy::Selective enum variant
- Remove recent_send_window_secs configuration field
- Remove with_recent_send_window() configuration method
- Remove record_send() and related recent send tracking logic
- Remove record_transaction_send() from client API
- Remove Selective validation from config validation
- Update MempoolFilter::new() to remove Duration parameter
- Update all MempoolFilter instantiations throughout codebase
- Remove test_selective_strategy() test
- Update all remaining tests to use FetchAll instead of Selective
- Set MempoolStrategy::FetchAll as the new default

FFI Changes:
- Remove FFIMempoolStrategy::Selective enum variant
- Update dash_spv_ffi_config_get_mempool_strategy() default to FetchAll
- Remove record_transaction_send() call from FFI client

With FetchAll strategy:
- Client fetches all announced mempool transactions (up to capacity limit)
- InstantLocks will be received and processed for all transactions
- Higher bandwidth usage but complete mempool visibility
- Suitable for monitoring network activity and testing

The BloomFilter strategy remains available for future privacy-focused
implementations.
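The capacity-limited fetch described above amounts to a simple selection step; the function name and signature are illustrative, not the crate's API:

```rust
/// FetchAll behaviour: request every announced txid, but never exceed the
/// mempool capacity limit once in-flight requests are accounted for.
fn select_txids_to_fetch(
    announced: &[[u8; 32]],
    in_flight: usize,
    capacity: usize,
) -> Vec<[u8; 32]> {
    let room = capacity.saturating_sub(in_flight);
    announced.iter().take(room).copied().collect()
}
```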

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…nced

Prevents ISLOCK validation errors and peer bans during header sync when masternode engine/quorums are not yet available.
feat: add peer reputation penalization for invalid ChainLocks
Improved logging and proactive disconnection for peers sending messages with invalid checksums. This change ensures that potential message corruption is addressed promptly, enhancing network stability.
Enhanced TCP connection handling by introducing a stateful framing buffer to ensure complete message frames are read before decoding. This change improves error handling for invalid checksums and stream desynchronization, providing better diagnostics and stability in network communication.
Updated the read_some closure to remove unnecessary mutability, enhancing code clarity and maintainability. This change contributes to the ongoing improvements in TCP connection handling.
Implemented checks to validate the announced payload length against the maximum message size, preventing overflow and ensuring robust error handling. This enhancement improves the stability and security of TCP connection handling.
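The framing check boils down to the sketch below, assuming the standard 24-byte P2P message header (magic, command, payload length, checksum) that Dash shares with Bitcoin; the constant and function names are illustrative:

```rust
const MAX_MESSAGE_SIZE: usize = 32 * 1024 * 1024; // assumed limit for illustration

/// Returns Ok(Some(frame_len)) once the buffer holds a complete frame,
/// Ok(None) if more bytes are needed, and Err if the announced payload
/// length exceeds the maximum message size.
fn frame_ready(buf: &[u8]) -> Result<Option<usize>, String> {
    const HEADER_LEN: usize = 24; // magic(4) + command(12) + length(4) + checksum(4)
    if buf.len() < HEADER_LEN {
        return Ok(None);
    }
    // Payload length sits at offset 16, little-endian.
    let payload_len = u32::from_le_bytes([buf[16], buf[17], buf[18], buf[19]]) as usize;
    if payload_len > MAX_MESSAGE_SIZE {
        return Err(format!("announced payload {payload_len} exceeds max"));
    }
    let total = HEADER_LEN + payload_len;
    Ok(if buf.len() >= total { Some(total) } else { None })
}
```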
Updated the ValidationMode enum to include the Default trait, setting the default value to Full. This change streamlines the code and enhances usability by providing a clear default validation mode.
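With the `#[default]` attribute this is a one-line change; the second variant below is a hypothetical placeholder since the enum's other variants aren't shown in the source:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum ValidationMode {
    /// Validate everything (the default).
    #[default]
    Full,
    /// Hypothetical variant, for illustration only.
    None,
}
```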
fix: resolve flaky connection errors due to TCP buffer issues
Added functionality to monitor phase changes in the sync coordinator, enhancing the progress emission logic. This update ensures that changes in synchronization phases are accurately reflected in the emitted progress, improving the overall synchronization process.
feat: track phase changes in sync coordinator
* chore: update GitHub Actions to use ubuntu-22.04-arm for all jobs

* fix(ffi): use c_char instead of i8 for platform-agnostic FFI types

Replace hardcoded i8 casts with c_char to fix ARM compilation errors.
On ARM architectures, c_char is u8 (not i8 as on x86/x86_64), causing
type mismatches in FFI test code.

- Replace all *mut i8 and *const i8 casts with c_char equivalents
- Update array declarations from [0i8; N] to [0u8; N] with proper casting
- Add c_char import to all affected test files

* fix(ffi): use .cast() instead of as cast for raw pointer conversion

Replace 'as *mut u8' casts with .cast::<u8>() method to fix clippy
warnings about unnecessary casts on ARM architectures.

On ARM, c_char is u8, making the explicit cast unnecessary. Using
.cast::<u8>() is the modern Rust approach and avoids clippy warnings
while maintaining platform compatibility.
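The portable pattern in one small (illustrative) helper:

```rust
use std::os::raw::c_char;

/// On x86/x86_64 `c_char` is `i8`; on ARM it is `u8`. Declaring buffers as
/// `u8` and converting with `.cast()` compiles cleanly on both, with no
/// clippy warnings about unnecessary casts.
fn first_byte(buf: &[u8]) -> c_char {
    let ptr: *const c_char = buf.as_ptr().cast();
    unsafe { *ptr }
}
```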
This commit addresses two critical issues in the FFI layer:

1. Event Throttling: Limit event draining to 500 events per call to prevent
   UI/main thread flooding. Remaining events stay queued for the next drain.

2. Memory Leak Fix: Properly free heap-allocated stage_message strings in
   FFIDetailedSyncProgress after each progress callback to avoid per-callback
   memory leaks.
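The throttled drain is a small piece of queue bookkeeping; a sketch with assumed names (the 500-event cap is from the commit message):

```rust
use std::collections::VecDeque;

const MAX_EVENTS_PER_DRAIN: usize = 500; // cap from the commit message

/// Drain at most MAX_EVENTS_PER_DRAIN events; anything beyond the cap
/// stays queued for the next drain call, so a burst of events cannot
/// flood the UI/main thread in a single callback.
fn drain_events<T>(queue: &mut VecDeque<T>) -> Vec<T> {
    let n = queue.len().min(MAX_EVENTS_PER_DRAIN);
    queue.drain(..n).collect()
}
```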

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit improves SPV client state management and monitoring:

1. Enhanced clear_storage() to fully reset in-memory state:
   - Resets chain state to network baseline
   - Clears sync manager filter state
   - Resets all statistics (peers, heights, downloads, etc.)
   - Clears received filter heights tracking
   - Resets mempool state and bloom filter
   - Previously only cleared on-disk storage

2. Added wallet balance display to status logs:
   - Shows balance in DASH denomination (8 decimal places)
   - Uses TypeId-based downcasting for WalletManager support
   - Balance displayed alongside sync progress metrics
   - Added filters_received count to status output
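The 8-decimal display follows from the unit definition (1 DASH = 100,000,000 duffs); the helper name is illustrative:

```rust
/// Format a duff amount as DASH with 8 decimal places,
/// e.g. 150_000_000 duffs -> "1.50000000".
fn format_dash(duffs: u64) -> String {
    format!("{}.{:08}", duffs / 100_000_000, duffs % 100_000_000)
}
```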

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
pauldelucia and others added 28 commits December 12, 2025 20:16
* segments cache struct and evict method generalized

* save segments to disk made generic inside the segments cache struct

* save_dirty_segments logic refactored

* tbh I don't know what I refactored here

* removed io module inside storage/disk

* you are right rabbit

* store in segment moved to SegmentCache

* sentinel headers creation moved to the Persistable trait and encapsulated there

* unified sentinel item behaviour - no longer compiles because of the tip_height calculation

* renames

* new struct to manage the hashmap of segments and tip height moved - doesnt compile yet, wip

* get_headers generalized in the SegmentsCache struct - wip, doesn't compile

* store headers logic moved to the SegmentsCache and introduced method to better handle tip_height and storage index - wip, doesn't compile

* store_headers_impl using precomputed_hashes didn't provide real benefits with the current usage - wip, doesn't compile

* removed unused StorageManager::get_headers_batch method - wip, doesn't compile

* removed warnings

* ensure segment loaded moved inside the SegmentsCache with a logic change: we ask for a segment and if it doesn't exist in memory we load it from disk - wip, doesn't compile

* const MAX_ACTIVE_SEGMENTS encapsulated - wip, doesn't compile

* removed one commit as it is fixed

* created a SegmentsCache::store_headers_at_height - wip, doesn't compile

* removed inconsistency when loading segments metadata

* removed two methods now unused because that behaviour is encapsulated

* building SegmentCache tip_height when creating the struct

* removed unused function

* some refactor and removed the notification enum and related behaviour - wip, doesn't compile

* disk storage manager worker no longer needs cloned headers to work

* renamed segments storage fields

* removed new unused function

* evict logic removed

* save dirty segments logic moved into SegmentsCache

* clippy warnings fixed

* save dirty is now an instance method for the DiskStorageManager

* when persisting segments we try to create the parent dirs to ensure they exist

* improved logic to ensure different clear behaviour for SegmentsCache

* correctly rebuilding the block reverse index

* fixed bug found by test test_checkpoint_storage_indexing

* fixed bug updating tip_height in SegmentCache, spotted by test test_filter_header_segments

* fixed bug, we stop persisting segments after we find the first sentinel, to correctly initialize valid_count - bug spotted by test test_filter_header_persistence

* refactor: HEADER_PER_SEGMENT encapsulated inside segment and renamed to ITEMS_PER_SEGMENT - wip, doesn't compile

* block index rebuild logic moved into SegmentCache<BlockHeader> and load_segment_metadata renamed in favor of a better name for its current behaviour, being the block index constructor

* added some cool inlines

* fixed test that was creating a sentinel filter header at height 0, making the segment not persist entirely

* renamed header reference to item in segments.rs so it's clear that the new struct can work with any struct

* clippy warning fixed

* logging when storing multiple items simplified

* removed sentinel headers from the segments logic

* unit tests for the segment and segment_cache structs after the refactor (#259)

* removed unused methods after rebase

* renamed and removed old documentation for store_headers_impl

* refactored and adjusted docs for conversion methods between storage index, height and offset

* removed old comments and now requiring sync_base_height when creating the SegmentCache

* quick fix to load sync_base_height if persisted before

* review comments addressed

* read block index operation made async

* using atomic write where we write to disk
* Implement segmented filter storage

* Store filters during initial sync

* Add startup initialization

* Add tests

* filter data storage uses SegmentCache generic struct to persist the data

* fixed clippy warning

* correctly setting sync_base_height using the chain state in the new filters segment cache

---------

Co-authored-by: xdustinface <xdustinfacex@gmail.com>
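The storage index / height / offset conversions mentioned in the refactor reduce to simple arithmetic. A sketch under assumed names and an illustrative segment size (not the crate's actual constant):

```rust
const ITEMS_PER_SEGMENT: u32 = 10_000; // illustrative value

/// Flat storage index relative to the sync base.
/// Assumes `height >= sync_base_height`.
fn storage_index(height: u32, sync_base_height: u32) -> u32 {
    height - sync_base_height
}

/// Split a flat storage index into (segment number, offset within segment).
fn segment_and_offset(index: u32) -> (u32, u32) {
    (index / ITEMS_PER_SEGMENT, index % ITEMS_PER_SEGMENT)
}
```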
We currently clone all network messages in `DashSpvClient::handle_network_message` when passing them to `MessageHandler::handle_network_message`, and some get cloned again further down the track. This PR changes the code to pass references instead.
* fix(headers2): Fix compressed headers protocol compatibility with Dash Core

This commit fixes critical incompatibilities between the Rust headers2
implementation and the C++ Dash Core reference implementation (DIP-0025).

- C++ uses offset=0 for "version not in cache" (full version present)
- C++ uses offset=1-7 for "version at position offset-1 in cache"
- Rust incorrectly used offset=7 for uncompressed, offset=0-6 for cached
- Now matches C++ semantics exactly

- C++ uses std::list with MRU (Most Recently Used) reordering
- Rust used a circular buffer without MRU reordering
- Changed to Vec<i32> with proper MRU behavior matching C++

- Fixed Decodable impl to read version when offset=0 (not offset=7)
- Added MissingVersion error variant for proper error handling

- Rewrote CompressionState to use Vec with MRU reordering
- Fixed compress() to use offset=0 for uncompressed versions
- Fixed decompress() to handle C++ offset semantics
- Updated Decodable to read version when offset=0
- Added comprehensive tests for C++ compatibility

- Enabled headers2 in handshake negotiation
- Enabled headers2 in sync manager
- Fixed phase transition when receiving empty headers2 response
- Re-enabled has_headers2_peer() check

- Added headers2_compatibility_test.rs with 12 tests verifying:
  - Version offset C++ semantics
  - MRU cache reordering behavior
  - Flag bit semantics
  - Serialization format compatibility
  - Cross-implementation compatibility
- All existing tests pass
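The corrected offset semantics and MRU reordering can be sketched in isolation. Names are illustrative; the cache size of 7 matches the offset range 1-7 described above:

```rust
const MAX_CACHE: usize = 7; // offsets 1..=7 address cached versions

/// MRU-ordered version cache matching the C++ semantics: offset 0 means
/// "version not cached, full version follows"; offset k (1..=7) means
/// "version at cache position k-1".
struct VersionCache(Vec<i32>);

impl VersionCache {
    fn compress(&mut self, version: i32) -> u8 {
        match self.0.iter().position(|&v| v == version) {
            Some(pos) => {
                // Cached: emit position+1 and move the entry to the front (MRU).
                self.0.remove(pos);
                self.0.insert(0, version);
                (pos + 1) as u8
            }
            None => {
                // Not cached: emit 0, insert at front, evict the LRU tail.
                self.0.insert(0, version);
                self.0.truncate(MAX_CACHE);
                0
            }
        }
    }
}
```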

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat(network): Add support for Headers2 in peer selection logic

This commit enhances the PeerNetworkManager to include support for the Headers2 protocol. It introduces logic to prefer peers that advertise Headers2 support when selecting a sync peer. The changes ensure that the current sync peer is updated accordingly, improving compatibility and efficiency in header synchronization.

- Added logic to check for Headers2 support in peer selection.
- Updated existing sync peer selection to prioritize peers with Headers2 capabilities.
- Ensured proper logging when a new sync peer is selected for Headers2.

This update aligns with ongoing efforts to improve protocol compatibility and performance in the network manager.

* refactor(headers2): Address code review feedback

- Consolidate redundant if/else blocks in compression state initialization
  (reduces 5 async lock acquisitions to 1)
- Simplify block locator construction (both branches returned vec![hash])
- Change process_headers to take &[CompressedHeader] instead of Vec to
  avoid cloning headers during sync

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(sync): Unify header sync finalization for regular and compressed headers

- Extract `finalize_headers_sync` helper to eliminate duplicated phase update logic
- Return decompressed header count from `handle_headers2_message` so both paths
  can track `headers_downloaded` and `headers_per_second` stats uniformly
- Remove special-case handling that skipped stats for compressed headers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(sync): Change handle_headers_message to take &[BlockHeader] instead of Vec

Avoids unnecessary cloning of headers vector when passing to the handler.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(headers2): Remove speculative genesis decompression comment

The defensive fallback mechanism (headers2_failed flag) is kept, but
the speculative comment about genesis compression state issues is removed.
With the C++ compatibility fix, this scenario should not occur.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(clippy): Remove needless borrows after slice refactor

Variables are already references from slice/iter, no need to borrow again.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: clippy warning

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* removed in memory storage manager

* tests related to in-memory storage removed; they tested the same operations as the DiskStorageManager. next time use generics pls

* storage with temp dir constructor renamed

* using different storage folders in examples
* removed the sync_base_height from the segmentCache struct

* tip height calculation updated to consider the sentinel block existence

* rolledback next_height method removal

* unit test updated to fit the new logic

* updated other tests to fit the new api requirements

* clippy warning fixed

* updated test to work with the new API behaviour

* removed file

* comments improved
* removed version from PersistentSyncState

* removed persist and recovery logic of the ChainState

* dropped dead code in sync_state

* removed logic to store sync state

* sync_state.rs deleted

* FFI docs updated
Replace hardcoded mnemonic with a CLI argument.
* Improve `test_gap_limit_maintenance`

* fix: `maintain_gap_limit` target calculation off by one

The target index was calculated incorrectly, causing one extra address to be generated.

See the test added which fails without the fix.
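The off-by-one comes down to the target-index formula; a hypothetical sketch (not the wallet's actual code) of the corrected calculation:

```rust
/// Highest address index needed so that exactly `gap` unused addresses
/// exist beyond the last used one. Assumes `gap >= 1`.
fn target_index(last_used: Option<u32>, gap: u32) -> u32 {
    match last_used {
        // Correct: last_used + gap, NOT last_used + gap + 1, which
        // would generate one extra address.
        Some(i) => i + gap,
        // No used addresses yet: indices 0..gap cover the gap.
        None => gap - 1,
    }
}
```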
That's basically what the gap limit already does, and it's not used anyway.
* removed chain storage and related code

* chore: removed fork detector and fork structs (#290)
#295)

This moves:
 - The `logo` folder into `contrib`
 - The protx test data files into `dash/contrib`
…284)

* empty file removed

* quorum module deleted since it is not being used
* removed chain storage and related code

* chore: removed fork detector and fork structs (#290)

* removed filters field from ChainState (ez)
This drops the `FFINetworks` enum, which was supposed to be used by the FFI layer to provide multiple networks via bitflag combinations. Since I'm moving away from multi-network support in the wallet, this PR removes it and uses `FFINetwork` instead.
…to it (#277)

* created benches with criterion and moved one performance test to it

* deterministic inputs for reproducible benchmarks
@coderabbitai (Contributor)

coderabbitai bot commented Dec 30, 2025

Review skipped — draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

6 participants