Replicate evaluations

To reproduce the evaluation results presented here, run eva with the settings below.

The .yaml config files for the different benchmark datasets can be found on GitHub. Download the config files and then, in the following commands, replace <task> with the name of the config you want to use.

Keep in mind:

  • Some datasets support automatic download by setting the argument download: true (either modify the .yaml config file or set the environment variable DOWNLOAD=true), while other datasets need to be downloaded manually beforehand. Please review the instructions in the corresponding dataset documentation.
  • The following eva predict_fit commands store the generated embeddings in the ./data/embeddings directory. To change this location, set the EMBEDDINGS_ROOT environment variable.
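
For example, both environment variables can be set once per shell session before invoking eva. A minimal sketch, using the documented default embeddings location as the example value:

```shell
# Opt in to automatic dataset download where supported,
# and choose where generated embeddings are stored.
export DOWNLOAD=true
export EMBEDDINGS_ROOT=./data/embeddings
echo "embeddings root: ${EMBEDDINGS_ROOT}"
```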

Pathology FMs

DINO ViT-S16 (random weights)

Evaluating the backbone with randomly initialized weights serves as a baseline: it shows how an FM performs without any prior learning on image tasks, against which the pretrained FMs can be compared. To evaluate, run:

MODEL_NAME="universal/vit_small_patch16_224_random" \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml
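
To sweep one backbone across several benchmarks, the same command can be wrapped in a loop. A sketch, where task_a and task_b are hypothetical placeholders for configs you actually downloaded; the echo prefix makes this a dry run, and removing it launches the evaluations:

```shell
# Dry run: print one predict_fit invocation per task config.
# task_a/task_b are placeholders for downloaded config names.
for task in task_a task_b; do
  echo MODEL_NAME="universal/vit_small_patch16_224_random" \
    eva predict_fit --config "configs/vision/pathology/offline/${task}.yaml"
done
```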

DINO ViT-S16 (ImageNet)

The next baseline model uses a pretrained ViT-S16 backbone with ImageNet weights. To evaluate, run:

MODEL_NAME="universal/vit_small_patch16_224_dino" \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

Lunit - DINO ViT-S16 (TCGA) [1]

Lunit released the weights for a DINO ViT-S16 backbone, pretrained on TCGA data, on GitHub. To evaluate, run:

MODEL_NAME=pathology/lunit_vits16 \
NORMALIZE_MEAN=[0.70322989,0.53606487,0.66096631] \
NORMALIZE_STD=[0.21716536,0.26081574,0.20723464] \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

Lunit - DINO ViT-S8 (TCGA) [1]

Lunit also released the weights for a DINO ViT-S8 backbone, pretrained on TCGA data. To evaluate, run:

MODEL_NAME=pathology/lunit_vits8 \
NORMALIZE_MEAN=[0.70322989,0.53606487,0.66096631] \
NORMALIZE_STD=[0.21716536,0.26081574,0.20723464] \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

Phikon (Owkin) - iBOT ViT-B16 (TCGA) [2]

Owkin released the weights for "Phikon", an FM trained with iBOT on TCGA data, via HuggingFace. To evaluate, run:

MODEL_NAME=pathology/owkin_phikon \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

UNI (MahmoodLab) - DINOv2 ViT-L16 (Mass-100k) [3]

The UNI FM by MahmoodLab is available on HuggingFace; note that access needs to be requested. To evaluate, run:

MODEL_NAME=pathology/mahmood_uni \
HF_TOKEN=<your-huggingface-token-for-downloading-the-model> \
IN_FEATURES=1024 \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml
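
If you prefer not to pass the token inline, it can be exported once for the session instead. A sketch, where the value is a placeholder for your own HuggingFace access token:

```shell
# Placeholder value -- substitute your own HuggingFace access token.
export HF_TOKEN="hf_your_token_here"
# Confirm the variable is set without printing the secret itself.
[ -n "$HF_TOKEN" ] && echo "HF_TOKEN is set"
```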

kaiko.ai - DINO ViT-S16 (TCGA) [4]

To evaluate kaiko.ai's FM with DINO ViT-S16 backbone, pretrained on TCGA data and available on GitHub, run:

MODEL_NAME=pathology/kaiko_vits16 \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

kaiko.ai - DINO ViT-S8 (TCGA) [4]

To evaluate kaiko.ai's FM with DINO ViT-S8 backbone, pretrained on TCGA data and available on GitHub, run:

MODEL_NAME=pathology/kaiko_vits8 \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

kaiko.ai - DINO ViT-B16 (TCGA) [4]

To evaluate kaiko.ai's FM with DINO ViT-B16 backbone, pretrained on TCGA data and available on GitHub, run:

MODEL_NAME=pathology/kaiko_vitb16 \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
IN_FEATURES=768 \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

kaiko.ai - DINO ViT-B8 (TCGA) [4]

To evaluate kaiko.ai's FM with DINO ViT-B8 backbone, pretrained on TCGA data and available on GitHub, run:

MODEL_NAME=pathology/kaiko_vitb8 \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
IN_FEATURES=768 \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

kaiko.ai - DINOv2 ViT-L14 (TCGA) [4]

To evaluate kaiko.ai's FM with DINOv2 ViT-L14 backbone, pretrained on TCGA data and available on GitHub, run:

MODEL_NAME=pathology/kaiko_vitl14 \
NORMALIZE_MEAN=[0.5,0.5,0.5] \
NORMALIZE_STD=[0.5,0.5,0.5] \
IN_FEATURES=1024 \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

H-optimus-0 (Bioptimus) - ViT-G14 [5]

Bioptimus released H-optimus-0, a model trained on a collection of 500,000 H&E slides; the weights are available on HuggingFace. To evaluate, run:

MODEL_NAME=pathology/bioptimus_h_optimus_0 \
NORMALIZE_MEAN=[0.707223, 0.578729, 0.703617] \
NORMALIZE_STD=[0.211883, 0.230117, 0.177517] \
IN_FEATURES=1536 \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

Prov-GigaPath - DINOv2 ViT-G14 [6]

To evaluate the Prov-GigaPath model, available on HuggingFace, run:

MODEL_NAME=pathology/prov_gigapath \
IN_FEATURES=1536 \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

hibou-B (hist.ai) - DINOv2 ViT-B14 (1M Slides) [7]

To evaluate hist.ai's FM with DINOv2 ViT-B14 backbone, pretrained on a proprietary dataset of one million slides and available for download on HuggingFace, run:

MODEL_NAME=pathology/histai_hibou_b \
NORMALIZE_MEAN=[0.7068,0.5755,0.722] \
NORMALIZE_STD=[0.195,0.2316,0.1816] \
IN_FEATURES=768 \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

hibou-L (hist.ai) - DINOv2 ViT-L14 (1M Slides) [7]

To evaluate hist.ai's FM with DINOv2 ViT-L14 backbone, pretrained on a proprietary dataset of one million slides and available for download on HuggingFace, run:

MODEL_NAME=pathology/histai_hibou_l \
NORMALIZE_MEAN=[0.7068,0.5755,0.722] \
NORMALIZE_STD=[0.195,0.2316,0.1816] \
IN_FEATURES=1024 \
eva predict_fit --config configs/vision/pathology/offline/<task>.yaml

References

[1]: Kang, Mingu, et al. "Benchmarking self-supervised learning on diverse pathology datasets." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.

[2]: Filiot, Alexandre, et al. "Scaling self-supervised learning for histopathology with masked image modeling." medRxiv (2023): 2023-07.

[3]: Chen, Richard J., et al. "A general-purpose self-supervised model for computational pathology." arXiv preprint arXiv:2308.15474 (2023).

[4]: Aben, Nanne, et al. "Towards Large-Scale Training of Pathology Foundation Models." arXiv preprint arXiv:2404.15217 (2024).

[5]: Saillard et al. "H-optimus-0." https://github.com/bioptimus/releases/tree/main/models/h-optimus/v0 (2024).

[6]: Xu, Hanwen, et al. "A whole-slide foundation model for digital pathology from real-world data." Nature (2024): 1-8.

[7]: Nechaev, Dmitry, Alexey Pchelnikov, and Ekaterina Ivanova. "Hibou: A Family of Foundational Vision Transformers for Pathology." arXiv preprint arXiv:2406.05074 (2024).