Conversation

@wash2 wash2 commented Aug 18, 2025

Rendering currently panics for SVGs with extremely large filters, like https://github.com/Davidoc26/wallpaper-selector/blob/main/data/icons/io.github.davidoc26.wallpaper_selector.svg.

I am not terribly familiar with this project, but it seems that while the layer is shrunk if it is too large, the filters are left unchanged. This eventually causes a source/destination size mismatch and a panic when the two pixmaps are asserted to be the same size. This change shrinks the filter via the transform as well, when necessary; in my testing this resolves the issue and produces the expected rendering.
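A rough sketch of the idea (names and the function shape are illustrative, not resvg's actual API): compute the ratio between the shrunk layer size and the transformed filter rect, and fold that factor into the transform used for filter rendering, so the source and destination pixmaps end up the same size.

```rust
// Hypothetical sketch of the fix described above. `shrink_factors` is an
// illustrative helper, not a function from resvg; in the real code the
// result would be pre-multiplied into the filter transform via pre_scale.

/// Scale factors mapping the transformed filter rect onto the shrunk
/// layer size. Inputs are (width, height) in pixels.
fn shrink_factors(shrunk: (u32, u32), tf_rect: (u32, u32)) -> (f32, f32) {
    (
        shrunk.0 as f32 / tf_rect.0 as f32,
        shrunk.1 as f32 / tf_rect.1 as f32,
    )
}

fn main() {
    // A 4000x4000 filter region shrunk to a 1000x1000 layer limit yields
    // a uniform 0.25 scale to fold into the filter transform.
    let (s_w, s_h) = shrink_factors((1000, 1000), (4000, 4000));
    assert_eq!((s_w, s_h), (0.25, 0.25));
    println!("scale = ({s_w}, {s_h})");
}
```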

@wash2 wash2 marked this pull request as draft August 18, 2025 21:31
@wash2 wash2 marked this pull request as ready for review August 18, 2025 23:13
@LaurenzV LaurenzV (Collaborator)

Sorry for the lack of response, I'll try to take a closer look soon.

@wash2 wash2 (Author) commented Aug 25, 2025

Thanks!

@LaurenzV LaurenzV (Collaborator) left a comment

Sorry again about the delay. It's a bit hard for me to judge whether it's completely correct as I'm not familiar with that part of the code, but given that Chrome can also render the SVG I believe it should be fine to merge. Just two small comments.

let s_w = shrunk.width() as f32 / tf_rect.to_int_rect().width() as f32;
let s_h = shrunk.height() as f32 / tf_rect.to_int_rect().height() as f32;
transform.pre_scale(s_w, s_h)
@LaurenzV (Collaborator):

Can't it also happen that the shrunken bbox has a different x/y starting point? In which case we probably also have to apply a translational transform? Or am I missing something?

@wash2 (Author):

I believe that pre_scale will apply the scaling to any translation that follows as well, unless I've misunderstood what you mean. Maybe an extra test could be good 😅

@LaurenzV (Collaborator):

Let's say that the original bbox was top-left: (-30, -30) bottom-right: (120, 120) and is then shrunk to top-left: (0, 0) bottom-right: (75, 75). In this case, you will correctly apply a scale of (0.5, 0.5) to reduce the width from 150 to 75, but you also need to translate by (30, 30) to move the origin to (0, 0), or am I missing something?
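The arithmetic in this example can be checked with a minimal affine transform (illustrative only; resvg actually uses tiny-skia's Transform):

```rust
// Minimal affine map (scale then translate) to check the bbox example:
// original bbox (-30,-30)..(120,120), target bbox (0,0)..(75,75).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Affine {
    sx: f32,
    sy: f32,
    tx: f32,
    ty: f32,
}

impl Affine {
    fn apply(&self, x: f32, y: f32) -> (f32, f32) {
        (x * self.sx + self.tx, y * self.sy + self.ty)
    }
}

fn main() {
    // Scale alone maps the top-left corner to (-15,-15), not (0,0):
    let scale_only = Affine { sx: 0.5, sy: 0.5, tx: 0.0, ty: 0.0 };
    assert_eq!(scale_only.apply(-30.0, -30.0), (-15.0, -15.0));

    // Adding a translation of (15,15) (the 30-unit offset in scaled
    // units) moves the origin to (0,0) and the bottom-right to (75,75):
    let with_translate = Affine { sx: 0.5, sy: 0.5, tx: 15.0, ty: 15.0 };
    assert_eq!(with_translate.apply(-30.0, -30.0), (0.0, 0.0));
    assert_eq!(with_translate.apply(120.0, 120.0), (75.0, 75.0));
}
```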

Would indeed be good to have a test, but not sure how easy it is to construct one.

@wash2 (Author):

Comparing the renderings in Firefox and Chrome, the Gaussian blur in the huge-region example is much more pronounced than in our output, so the result is still not 100% correct, I guess, but at least it no longer panics.

In addition, if I make a new test that shifts the blur to an x and y of 50, the image is sharp in the top-left corner and the result is clipped. It is not rendered any differently by resvg, though. I suspect the primitive transform handling in apply_inner may not be correct, but I'll have to look into it more.
