This module contains model wrappers, dataloaders, tests and all the ingredients needed to evaluate your single-label image classification models.
In particular, this module allows you to evaluate your model against the following criteria:
- Performance on images with different basic image attributes.
- Performance on images with various metadata from the datasets.
- Robustness against image perturbations such as blurring, resizing, and recoloring (performed with [`opencv`](https://github.com/opencv/opencv)); see the sketch after this list.
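To give a concrete idea of these perturbations, here is a minimal sketch using plain OpenCV calls; the file path is hypothetical, and the exact transformations and parameters used by the module's detectors may differ:

```python
import cv2

# Load an image as a BGR array (hypothetical path).
image = cv2.imread("example.jpg")

# Blurring: Gaussian blur with a 5x5 kernel.
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Resizing: shrink to half the original resolution.
height, width = image.shape[:2]
resized = cv2.resize(image, (width // 2, height // 2), interpolation=cv2.INTER_AREA)

# Recoloring: convert to grayscale, then back to a 3-channel image.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
recolored = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
```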
## Wrapped Datasets
- [geirhos_conflict_stimuli](https://www.tensorflow.org/datasets/catalog/geirhos_conflict_stimuli) through TensorFlow Datasets
- [CIFAR100](https://huggingface.co/datasets/uoft-cs/cifar100) through Hugging Face
- [Skin cancer](https://huggingface.co/datasets/marmal88/skin_cancer) through Hugging Face
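As a quick sanity check, the underlying datasets can also be pulled directly from their hubs. A minimal sketch (this bypasses the module's dataloader wrappers; the split names follow the upstream dataset cards and are assumptions here):

```python
import tensorflow_datasets as tfds
from datasets import load_dataset

# geirhos_conflict_stimuli via TensorFlow Datasets (test split only).
geirhos = tfds.load("geirhos_conflict_stimuli", split="test")

# CIFAR100 and the skin cancer dataset via Hugging Face.
cifar100 = load_dataset("uoft-cs/cifar100", split="train")
skin_cancer = load_dataset("marmal88/skin_cancer", split="train")
```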
## Scan and Supported Classification
Once the model and dataloader (`dl`) are wrapped, you can scan the model with the scan API in the Giskard Vision core:
```python
from giskard_vision.core.scanner import scan

results = scan(model, dl)
```
It adapts the [scan API in the Giskard Python library](https://github.com/Giskard-AI/giskard#2--scan-your-model-for-issues) to automatically scan the vision model with the dataloader.
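The returned `results` object can then be inspected or saved. A minimal usage sketch, assuming the report mirrors the `ScanReport` of the Giskard Python library (the `to_html` method is taken from that library and is an assumption for this module):

```python
# Render the scan report inline in a Jupyter notebook.
display(results)

# Export the report as a standalone HTML file
# (method assumed from the Giskard Python library's ScanReport).
results.to_html("scan_report.html")
```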
Currently, due to constraints of the scan API, we support only a subset of image classification tasks:
- [x] Multiclass and single-label
- [ ] Multiclass and multi-label
We are working to remove this limitation of the scan.
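To make the distinction concrete, here is a minimal sketch (plain NumPy, independent of this module's API) of the two output conventions:

```python
import numpy as np

# Multiclass, single-label (supported): each image gets exactly one class,
# e.g. the argmax over softmax probabilities.
probs = np.array([0.1, 0.7, 0.2])
single_label = int(np.argmax(probs))         # -> 1

# Multiclass, multi-label (not yet supported): an image can get several
# classes, e.g. by thresholding independent sigmoid scores.
scores = np.array([0.9, 0.2, 0.6])
multi_labels = np.nonzero(scores >= 0.5)[0]  # -> array([0, 2])
```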