⚡️ Speed up function _is_data_matrix by 22%
#33
📄 22% (0.22x) speedup for _is_data_matrix in pymde/preprocess/generic.py
⏱️ Runtime: 114 microseconds → 93.6 microseconds (best of 135 runs)
📝 Explanation and details
The optimization reorders the conditional checks to put the faster isinstance() check first, avoiding expensive sp.issparse() calls in the common case where the data is a NumPy array or PyTorch tensor.

Key changes: the check changed from sp.issparse(data) or isinstance(data, (np.ndarray, torch.Tensor)) to checking isinstance() first with an early return, falling back to sp.issparse() only when needed.

Why this is faster: isinstance() is a native Python operation that is very fast for types like np.ndarray and torch.Tensor, whereas sp.issparse() is more expensive because it must check against multiple scipy sparse matrix types and their inheritance hierarchy.

Performance characteristics: sparse inputs now pay for one extra isinstance() check before sp.issparse() runs, but this is acceptable since they are likely less common. The 22% overall speedup suggests the workload is dominated by dense matrix inputs, where this optimization provides the greatest benefit.
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run git checkout codeflash/optimize-_is_data_matrix-mgsr5ql6 and push.