
Conversation

@eseiler
Member

@eseiler eseiler commented Jan 26, 2024

The capacity remains the same after calling clear.

I also renamed kmers to local_kmers in loop_over_children because it might otherwise shadow the outer kmers.
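For illustration, a minimal, self-contained sketch of the shadowing hazard the rename avoids. This is not the library code: std::unordered_set stands in for robin_hood::unordered_flat_set, and the lambda is a stripped-down stand-in for loop_over_children.

#include <cstdint>
#include <unordered_set>

int main()
{
    std::unordered_set<std::uint64_t> kmers{};

    auto loop_over_children = [&kmers]()
    {
        // This local set shadows the captured `kmers`, so the insert below
        // only ever touches the local set, never the outer one.
        std::unordered_set<std::uint64_t> kmers{};
        kmers.insert(42u);
    };

    loop_over_children();
    // The outer kmers is still empty here; renaming the inner set (e.g. local_kmers)
    // makes the intent explicit and avoids -Wshadow warnings.
    return 0;
}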


@eseiler eseiler requested a review from smehringer January 26, 2024 16:18
@seqan-actions seqan-actions added lint [INTERNAL] used for linting and removed lint [INTERNAL] used for linting labels Jan 26, 2024
@codecov

codecov bot commented Jan 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.63%. Comparing base (dbbfb3d) to head (abad908).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #192   +/-   ##
=======================================
  Coverage   99.63%   99.63%           
=======================================
  Files          51       51           
  Lines        1930     1930           
  Branches        5        5           
=======================================
  Hits         1923     1923           
  Misses          7        7           

@smehringer
Member

I thought we did this deliberately because it is faster if kmers doesn't have to reallocate memory every time?

@eseiler
Member Author

eseiler commented Jan 27, 2024

Usually yes, but in this particular case we wanted to reduce the memory consumption, because we do the recursion afterwards. We didn't want to keep the kmers around since they are not needed anymore, and clear doesn't deallocate.
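As a minimal illustration of that last point (a sketch, using std::vector as a stand-in for the robin_hood set): clear() destroys the elements but keeps the allocation, so memory is only returned when the container is shrunk, swapped out, or destroyed.

#include <cstdint>
#include <vector>

int main()
{
    std::vector<std::uint64_t> kmers(1'000'000u, 42u);

    kmers.clear(); // size() becomes 0, but capacity() is unchanged: nothing is freed

    kmers.shrink_to_fit();                    // non-binding request to drop the unused capacity
    std::vector<std::uint64_t>{}.swap(kmers); // swapping with an empty vector releases the buffer

    return 0;
}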

I can also run raptor with and without that change and check how it affects timings and RAM.

@smehringer
Member

I think this change needs at least one benchmark on RefSeq to check that we don't degrade the performance too much.

@eseiler
Member Author

eseiler commented Feb 29, 2024

I think this change needs at least one benchmark on RefSeq to check that we don't degrade the performance too much.

Yes, I'll run RefSeq with and without. I figure this is more or less I/O limited, but let's see :)

@eseiler eseiler force-pushed the fix/actual_mem_reduction branch from 4f1d194 to abad908 Compare October 7, 2024 12:08
@seqan-actions seqan-actions added lint [INTERNAL] used for linting and removed lint [INTERNAL] used for linting labels Oct 7, 2024
@seqan-actions
Member

Documentation preview available at https://docs.seqan.de/preview/seqan/hibf/192

@smehringer
Member

On 40k RefSeq genomes with tmax 256, there is no change in RAM usage with this patch.

@eseiler
Member Author

eseiler commented Oct 11, 2024

On 40k RefSeq genomes with tmax 256, there is no change in RAM usage with this patch.

I guess that makes sense, because kmers only holds the k-mers of the maximum bin (i.e., a single bin).

Then the only (probable) alternative is to read the files multiple times.

@smehringer
Member

or use fewer threads :D

@eseiler
Member Author

eseiler commented Oct 11, 2024

For this PR, we could also think about some refactoring.

For example, we could do something like

auto & ibf = hibf.ibf_vector[ibf_pos];

{
    robin_hood::unordered_flat_set<uint64_t> kmers{};

    auto initialise_max_bin_kmers = [&]() -> size_t
    {
        if (current_node.max_bin_is_merged())
        {
            // recursively initialize favourite child first
            technical_bin_to_ibf_id[current_node.max_bin_index] =
                hierarchical_build(hibf,
                                    kmers,
                                    current_node.children[current_node.favourite_child_idx.value()],
                                    data,
                                    false);
            return 1;
        }
        else // max bin is not a merged bin
        {
            // we assume that the max record is at the beginning of the list of remaining records.
            auto const & record = current_node.remaining_records[0];
            build::compute_kmers(kmers, data, record);
            build::update_user_bins(technical_bin_to_user_bin_id, record);
            return record.number_of_technical_bins;
        }
    };
    // initialize lower level IBF
    size_t const max_bin_tbs = initialise_max_bin_kmers();
    ibf = construct_ibf(parent_kmers, kmers, max_bin_tbs, current_node, data, is_root);
}

// parse all other children (merged bins) of the current ibf
auto loop_over_children = [&]()
{
    /* ... */
};
   
loop_over_children();

robin_hood::unordered_flat_set<uint64_t> kmers{};

// If max bin was a merged bin, process all remaining records, otherwise the first one has already been processed
size_t const start{(current_node.max_bin_is_merged()) ? 0u : 1u};
for (size_t i = start; i < current_node.remaining_records.size(); ++i)
{
    /* ... */
}

We put kmers into a scope for its first use. Then we run loop_over_children.
Afterwards, we have a fresh kmers set that is used for filling the current IBF.

Or something else...
