Update the Etherscan scraper #4789
base: master
Conversation
Pull Request Overview
This PR updates the Etherscan scraper to adapt to changes in the Etherscan website structure by modifying HTML parsing selectors and extraction methods.
- Removes website_link field from project information extraction
- Updates HTML selectors to work with the current Etherscan website structure
- Refactors the official_link function to handle multiple social media platforms with specific URL patterns
Reviewed Changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| lib/sanbase/external_services/etherscan/scraper.ex | Updates HTML parsing logic with new selectors and removes website_link extraction |
| test/sanbase/external_services/etherscan/scraper_test.exs | Removes website_link assertion from test expectations |
```elixir
|> case do
  nil -> nil
  link -> Floki.attribute(link, "href") |> List.first()
end
```
Copilot AI (Aug 21, 2025)
Floki.find/2 returns a list of elements, but Enum.find/2 expects each element to be a complete HTML element. The lambda then tries to extract the 'href' attribute from 'link', yet 'link' might not be in the expected format. Consider calling Floki.attribute/2 on the entire list first, then filtering the URLs.
Suggested change:
```elixir
end
|> Floki.attribute("href")
|> Enum.find(fn href ->
  href && !String.contains?(href, "etherscan-blog")
end)
```
```elixir
|> String.split()
|> Enum.find(fn x -> String.starts_with?(x, "Supply") end)
|> (fn supply -> String.trim(supply, "Supply:") end).()
|> Decimal.new()
```
Copilot AI (Aug 21, 2025)
The function removes the logic that previously parsed the 'Supply:' prefix from the total-supply string but retains the binary guard. This could cause parsing issues if the input still contains text prefixes that need to be stripped before converting to Decimal.
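As a hedged illustration of the trimming step being discussed (the input string below is hypothetical, not taken from the PR), String.trim/2 strips the leading "Supply:" occurrence, and a comma-stripping step would still be needed before Decimal.new/1 accepts the value:

```elixir
# Hypothetical input resembling the scraped supply text.
token =
  "Max Total Supply:21,000,000 ETH"
  |> String.split()
  |> Enum.find(fn x -> String.starts_with?(x, "Supply") end)

# String.trim/2 removes leading/trailing occurrences of "Supply:";
# commas must still be removed before the string can become a Decimal.
cleaned =
  token
  |> String.trim("Supply:")
  |> String.replace(",", "")
```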
```elixir
nil

h4 ->
  Floki.find(h4, "b")
```
Copilot AI (Aug 21, 2025)
Using Floki.find/2 on an h4 element to find nested 'b' tags may not work as expected. The h4 variable contains a single HTML element, but Floki.find/2 typically expects an HTML document or fragment. Consider using a different approach to extract the bold text from within the h4 element.
Suggested change:
```diff
- Floki.find(h4, "b")
+ Floki.find([h4], "b")
```
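A small sketch of why the wrapped form works (this assumes the Floki dependency is available, and the HTML fragment is made up for illustration): Floki.find/2 traverses an html_tree, i.e. a list of nodes, so a single element tuple is wrapped in a list before searching inside it.

```elixir
# Parse a fragment into an html_tree (a list of nodes).
{:ok, doc} = Floki.parse_fragment("<h4>Total <b>Supply</b></h4>")

# Pattern-match out the single h4 node tuple...
[h4] = Floki.find(doc, "h4")

# ...and wrap it in a list so Floki.find/2 can traverse it.
Floki.find([h4], "b")
```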
```elixir
total_supply: total_supply(html) || project_info.total_supply,
main_contract_address: project_info.main_contract_address || main_contract_address(html),
token_decimals: project_info.token_decimals || token_decimals(html),
website_link: project_info.website_link || website_link(html),
```
This is removed because I couldn't find a proper way to extract the website URL from the HTML.