Update timm universal (support transformer-style model) #1004

Merged: 14 commits, Dec 19, 2024
Fix typo error & Update doc
brianhou0208 committed Dec 8, 2024
commit c58b7cf2ef5b0392d51d70ffdce3c8afcbf00bc7
4 changes: 2 additions & 2 deletions docs/encoders_timm.rst
@@ -16,8 +16,8 @@ Total number of encoders: 761 (579+182)

To use following encoders you have to add prefix ``tu-``, e.g. ``tu-adv_inception_v3``

-Traditional-Style Models
-~~~~~~~~~~~~~~~~~~~~~~~~~
+Traditional-Style
+~~~~~~~~~~~~~~~~~

These models typically produce feature maps at the following downsampling scales relative to the input resolution: 1/2, 1/4, 1/8, 1/16, and 1/32

2 changes: 1 addition & 1 deletion segmentation_models_pytorch/encoders/timm_universal.py
@@ -117,7 +117,7 @@ def __init__(
        else:
            if "dla" in name:
                # For 'dla' models, out_indices starts at 0 and matches the input size.
-               kwargs["out_indices"] = tuple(range(1, depth + 1))
+               common_kwargs["out_indices"] = tuple(range(1, depth + 1))

        self.model = timm.create_model(
            name, **_merge_kwargs_no_duplicates(common_kwargs, kwargs)
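The fix moves the `out_indices` assignment from the user-facing `kwargs` dict into `common_kwargs`, so the encoder's own setting is applied rather than silently conflicting with user overrides at merge time. A minimal standalone sketch of the index computation (values are illustrative, not the library's full construction logic):

```python
# Sketch of the fixed assignment, assuming depth = 5 as in a typical
# five-stage encoder; variable names mirror the diff above.
depth = 5
common_kwargs = {"features_only": True}  # baseline kwargs (illustrative)

# 'dla' models expose a feature at index 0 that matches the input size,
# so skip it and select indices 1..depth (scales 1/2 through 1/32).
common_kwargs["out_indices"] = tuple(range(1, depth + 1))
print(common_kwargs["out_indices"])  # → (1, 2, 3, 4, 5)
```

Skipping index 0 keeps the 'dla' family consistent with the traditional-style encoders described in the docs change, whose first returned feature map is at 1/2 resolution.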