Inference API

Base Inference Class

class pathml.inference.InferenceBase

Base class for all ONNX Models. Each transform must operate on a Tile.

abstract F(target)

functional implementation

abstract apply(tile)

modify Tile object in-place

get_model_card()

Returns model card.

reshape(image)

standard reshaping of tile image

set_citation(citation)

Sets the “citation” parameter in the model card.

Parameters:

citation (str) – Citation for the model

set_model_input_notes(note)

Sets the “model_input_notes” parameter in the model card.

Parameters:

note (str) – Comments on the model input

set_model_output_notes(note)

Sets the “model_output_notes” parameter in the model card.

Parameters:

note (str) – Comments on the model output

set_model_type(model_type)

Sets the “model_type” parameter in the model card.

Parameters:

model_type (str) – Type of model, e.g. “segmentation”

set_name(name)

Sets the “name” parameter in the model card.

Parameters:

name (str) – Name for the model

set_notes(note)

Sets the “notes” parameter in the model card.

Parameters:

note (str) – Any extra information you want to put in the model card

set_num_classes(num)

Sets the “num_classes” parameter in the model card.

Parameters:

num (int) – Number of classes your model predicts
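
Example: the setters above fill in the model card that every InferenceBase subclass carries. A minimal sketch of populating and reading back a model card on an Inference instance (the ONNX path is illustrative and assumes the model has already been cleaned with remove_initializer_from_input):

    from pathml.inference import Inference

    # illustrative path to an ONNX model whose initializers were already removed
    inference = Inference(
        model_path="my_model_clean.onnx",
        input_name="data",
        num_classes=3,
        model_type="segmentation",
        local=True,
    )

    # populate the model card
    inference.set_name("my segmentation model")
    inference.set_citation("Author et al. (2024)")
    inference.set_model_input_notes("expects 256 x 256 RGB tiles")
    inference.set_model_output_notes("returns one prediction map per class")
    inference.set_notes("trained on in-house data")

    print(inference.get_model_card())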

Inference Class

class pathml.inference.Inference(model_path=None, input_name='data', num_classes=None, model_type=None, local=True)

Transformation to run inference on an ONNX model.

Assumptions:
  • The ONNX model has been cleaned by remove_initializer_from_input first

Parameters:
  • model_path (str) – path to ONNX model without initializers

  • input_name (str) – name of the input the ONNX model accepts, default = “data”

  • num_classes (int) – number of classes you are predicting

  • model_type (str) – type of model, e.g. “segmentation”

  • local (bool) – True if the model is stored locally, default = True

F(image)

functional implementation

apply(tile)

modify Tile object in-place

inference(image)

runs the ONNX model on the input image
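
Example: Inference is a standard PathML transform, so it can be composed into a Pipeline and run over every tile of a slide. A minimal sketch, assuming a cleaned ONNX model and an H&E slide at the illustrative paths below:

    from pathml.core import HESlide
    from pathml.preprocessing import Pipeline
    from pathml.inference import Inference

    inference = Inference(
        model_path="my_model_clean.onnx",  # illustrative path to a cleaned ONNX model
        input_name="data",
        num_classes=3,
        model_type="segmentation",
        local=True,
    )

    pipeline = Pipeline([inference])

    wsi = HESlide("example_slide.svs")     # illustrative slide path
    wsi.run(pipeline, tile_size=256)       # predictions are written to each Tile in place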

HaloAI Inference Class

class pathml.inference.HaloAIInference(model_path=None, input_name='data', num_classes=None, model_type=None, local=True)

Transformation to run inference on a HALO AI ONNX model.

Assumptions:
  • Assumes that the ONNX model returns a tensor in which there is one prediction map for each class

  • For example, if there are 5 classes, the ONNX model will output a (1, 5, Height, Width) tensor

  • If you choose to argmax the classes, this class assumes that a softmax or sigmoid has already been applied

  • HaloAI ONNX models always have 20 class maps, so you need to index into the first x maps if you have x classes

Parameters:
  • model_path (str) – path to HaloAI ONNX model without initializers

  • input_name (str) – name of the input the ONNX model accepts, default = “data”

  • num_classes (int) – number of classes you are predicting

  • model_type (str) – type of model, e.g. “segmentation”

  • local (bool) – True if the model is stored locally, default = True

F(image)

functional implementation

apply(tile)

modify Tile object in-place
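
Example: because HALO AI exports always carry 20 class maps, HaloAIInference keeps only the first num_classes maps before any argmax. A usage sketch with illustrative file paths:

    from pathml.preprocessing import Pipeline
    from pathml.inference import HaloAIInference, remove_initializer_from_input

    # strip initializers from the exported HALO AI model first (illustrative paths)
    remove_initializer_from_input("halo_model.onnx", "halo_model_clean.onnx")

    halo = HaloAIInference(
        model_path="halo_model_clean.onnx",
        input_name="data",
        num_classes=5,        # keep only the first 5 of the 20 HALO AI class maps
        model_type="segmentation",
        local=True,
    )

    pipeline = Pipeline([halo])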

RemoteTestHoverNet Class

class pathml.inference.RemoteTestHoverNet(model_path='temp.onnx', input_name='data', num_classes=5, model_type='Segmentation', local=False)

Transformation to run inference on a pretrained HoverNet ONNX model downloaded from HuggingFace.

Citation for model: Pocock J, Graham S, Vu QD, Jahanifar M, Deshpande S, Hadjigeorghiou G, Shephard A, Bashir RM, Bilal M, Lu W, Epstein D. TIAToolbox as an end-to-end library for advanced tissue image analytics. Communications medicine. 2022 Sep 24;2(1):120.

Parameters:
  • model_path (str) – temporary file name used to download the ONNX model from HuggingFace, do not change

  • input_name (str) – name of the input the ONNX model accepts, default = “data”, do not change

  • num_classes (int) – number of classes you are predicting, do not change

  • model_type (str) – type of model, e.g. “segmentation”, do not change

  • local (bool) – True if the model is stored locally, default = False for this remote model, do not change

apply(tile)

modify Tile object in-place

remove()

removes the temporary ONNX file downloaded from HuggingFace
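
Example: RemoteTestHoverNet downloads its ONNX weights from HuggingFace at construction time, so the defaults should not be changed; call remove() when finished to delete the temporary file. A sketch with an illustrative slide path and tile size:

    from pathml.core import HESlide
    from pathml.preprocessing import Pipeline
    from pathml.inference import RemoteTestHoverNet

    hovernet = RemoteTestHoverNet()        # downloads temp.onnx from HuggingFace
    pipeline = Pipeline([hovernet])

    wsi = HESlide("example_slide.svs")     # illustrative slide path
    wsi.run(pipeline, tile_size=256)       # illustrative tile size

    hovernet.remove()                      # delete the downloaded temp.onnx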

RemoteMesmer Class

class pathml.inference.RemoteMesmer(model_path='temp.onnx', input_name='data', num_classes=3, model_type='Segmentation', local=False, nuclear_channel=None, cytoplasm_channel=None, image_resolution=0.5, preprocess_kwargs=None, postprocess_kwargs_nuclear=None, postprocess_kwargs_whole_cell=None)

Transformation to run inference on an ONNX Mesmer model.

Citation for model: Greenwald NF, Miller G, Moen E, Kong A, Kagel A, Dougherty T, Fullaway CC, McIntosh BJ, Leow KX, Schwartz MS, Pavelchek C. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nature biotechnology. 2022 Apr;40(4):555-65.

Parameters:
  • model_path (str) – temporary file name used to download the ONNX model from HuggingFace, do not change

  • input_name (str) – name of the input the ONNX model accepts, default = “data”, do not change

  • num_classes (int) – number of classes you are predicting, do not change

  • model_type (str) – type of model, e.g. “segmentation”, do not change

  • local (bool) – True if the model is stored locally, default = False for this remote model, do not change

  • nuclear_channel (int) – channel that defines cell nucleus

  • cytoplasm_channel (int) – channel that defines cell membrane or cytoplasm

  • image_resolution (float) – pixel resolution of image in microns. Currently only supports 0.5

  • preprocess_kwargs (dict) – keyword arguments to pass to the pre-processing function

  • postprocess_kwargs_nuclear (dict) – keyword arguments to pass to the nuclear post-processing function

  • postprocess_kwargs_whole_cell (dict) – keyword arguments to pass to the whole-cell post-processing function

F(image)

functional implementation

apply(tile)

modify Tile object in-place

inference(image)

runs the Mesmer ONNX model (with pre- and post-processing) on the input image

remove()

removes the temporary ONNX file downloaded from HuggingFace
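
Example: RemoteMesmer also downloads its weights from HuggingFace; only the channel indices (and any pre/post-processing keyword arguments) need to be set for your image. A sketch, assuming an illustrative multiplexed image with the nuclear stain in channel 0 and a membrane marker in channel 1:

    from pathml.core import SlideData, types
    from pathml.preprocessing import Pipeline
    from pathml.inference import RemoteMesmer

    mesmer = RemoteMesmer(
        nuclear_channel=0,       # illustrative: nuclear stain in channel 0
        cytoplasm_channel=1,     # illustrative: membrane/cytoplasm marker in channel 1
        image_resolution=0.5,
    )

    pipeline = Pipeline([mesmer])

    wsi = SlideData("example_codex.tif", slide_type=types.CODEX)  # illustrative slide
    wsi.run(pipeline, tile_size=256)

    mesmer.remove()              # delete the downloaded temp.onnx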

Helper functions

pathml.inference.remove_initializer_from_input(model_path, new_path)

Removes initializers from HaloAI ONNX models. Taken from https://github.com/microsoft/onnxruntime/blob/main/tools/python/remove_initializer_from_input.py

Parameters:
  • model_path (str) – path to ONNX model

  • new_path (str) – path to save the adjusted model without initializers

Returns:

ONNX model without initializers, ready to run inference using PathML
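
Example: a short sketch of cleaning an exported model before handing it to an Inference transform (paths are illustrative):

    from pathml.inference import remove_initializer_from_input

    # write a copy of the model with initializers stripped from the input graph
    remove_initializer_from_input("exported_model.onnx", "exported_model_clean.onnx")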

pathml.inference.check_onnx_clean(model_path)

Checks whether the model has had its initializers removed from the input graph. Adapted from https://github.com/microsoft/onnxruntime/blob/main/tools/python/remove_initializer_from_input.py

Parameters:

model_path (str) – path to ONNX model

Returns:

Boolean indicating whether there are initializers in the input graph.
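
Example: check_onnx_clean can be used to inspect an exported model before building an Inference transform (path illustrative):

    from pathml.inference import check_onnx_clean

    # per the return described above, the flag reports whether initializers
    # are still present in the input graph of the model
    print(check_onnx_clean("exported_model_clean.onnx"))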

pathml.inference.convert_pytorch_onnx(model, dummy_tensor, model_name, opset_version=10, input_name='data')

Converts a PyTorch model to ONNX. Adjusted from https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html

You need to define the model class and load the weights before exporting. See URL above for full steps.

Parameters:
  • model (torch.nn.Module) – PyTorch model to be converted

  • dummy_tensor (torch.Tensor) – dummy input tensor that is an example of what will be passed into the model

  • model_name (str) – name of the ONNX model to create, ending in .onnx

  • opset_version (int) – ONNX opset version to use for the export

  • input_name (str) – name assigned to the input of the exported model

Returns:

Exports the ONNX model converted from PyTorch.
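
Example: a sketch of exporting a small PyTorch model with convert_pytorch_onnx. The toy model and tensor shape are illustrative; in practice, instantiate your own architecture and load trained weights before exporting:

    import torch

    from pathml.inference import convert_pytorch_onnx

    # illustrative toy model standing in for a trained network
    model = torch.nn.Sequential(torch.nn.Conv2d(3, 2, kernel_size=3, padding=1))
    model.eval()

    # dummy input matching the shape the model expects at inference time
    dummy_tensor = torch.randn(1, 3, 256, 256)

    convert_pytorch_onnx(
        model=model,
        dummy_tensor=dummy_tensor,
        model_name="my_model.onnx",
        opset_version=10,
        input_name="data",
    )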