Welcome to this comprehensive guide where we'll explore how to run an ONNX model on Infernet, using our [infernet-container-starter](https://github.com/ritual-net/infernet-container-starter/)
examples repository. This tutorial is designed to give you an end-to-end understanding of how you can run your own
custom pre-trained models and interact with them both on-chain and off-chain.
**Model:** This example uses a pre-trained model to classify iris flowers. The code for the model
is located at our [`simple-ml-models`](https://github.com/ritual-net/simple-ml-models/tree/main/iris_classification) repository.
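Before wiring the model into Infernet, it can help to see what it does in isolation: it takes four scaled flower measurements and returns a probability for each of the three iris classes. The snippet below is a minimal local sketch using `onnxruntime`; the model file name (`iris.onnx`) and its input shape are assumptions for illustration, not something defined by this guide.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical local check of the exported model, assuming it was exported
# to "iris.onnx" with a single float32 input of shape (1, 4).
session = ort.InferenceSession("iris.onnx")
input_name = session.get_inputs()[0].name

# A scaled [sepal length, sepal width, petal length, petal width] vector.
scaled = np.array([[1.0380048, 0.5586108, 1.1037828, 1.712096]], dtype=np.float32)

outputs = session.run(None, {input_name: scaled})
print(outputs[0])  # probabilities for [setosa, versicolor, virginica]
```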
## Pre-requisites
For this tutorial you'll need to have the following installed:

1. [Docker](https://www.docker.com/)
2. [Foundry](https://book.getfoundry.sh/)
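Once the node is up and running the `onnx-iris` container (see the starter repository for the build and deployment steps), you can submit an inference job to its REST API on port `4000`, the same endpoint used for the status check below. The snippet here is a minimal sketch using Python's `requests` library; the exact payload shape expected by the container is an assumption, and the input is the scaled feature vector explained later in this guide.

```python
import requests

# Hypothetical job submission to a locally running Infernet node.
# Assumptions: the REST API listens on port 4000 (as in the status check
# below) and the onnx-iris container expects a batch containing one scaled
# feature vector under data.input.
payload = {
    "containers": ["onnx-iris"],
    "data": {
        # Scaled [sepal length, sepal width, petal length, petal width]
        "input": [[1.0380048, 0.5586108, 1.1037828, 1.712096]]
    },
}

response = requests.post("http://127.0.0.1:4000/api/jobs", json=payload)
print(response.json())
```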
You should get an output similar to the following:
```json
{
"id": "074b9e98-f1f6-463c-b185-651878f3b4f6"
}
```
Now, you can check the status of the job by running the following (make sure the job ID matches the one
you got from the previous request):
```bash
curl -X GET "http://127.0.0.1:4000/api/jobs?id=074b9e98-f1f6-463c-b185-651878f3b4f6"
```
This should return:
```json
[
  {
    "id": "074b9e98-f1f6-463c-b185-651878f3b4f6",
    "result": {
      "container": "onnx-iris",
      "output": [
        [
          [
            0.0010151526657864451,
            0.014391022734344006,
            0.9845937490463257
          ]
        ]
      ]
    },
    "status": "success"
  }
]
```
The `output` corresponds to the model's prediction for each of the classes:
```python
['setosa', 'versicolor', 'virginica']
```
In this case, the model predicts that the input corresponds to the class `virginica` with
a probability of `0.9845937490463257` (~98.5%).
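To turn these probabilities into a label programmatically, take the index of the largest value and look it up in the class list above. A minimal sketch, assuming `result` holds the `result` object from the job status response shown earlier:

```python
# Map the model's output probabilities back to the iris class names.
classes = ["setosa", "versicolor", "virginica"]

# The "result" object from the job status response above.
result = {
    "container": "onnx-iris",
    "output": [[[0.0010151526657864451, 0.014391022734344006, 0.9845937490463257]]],
}

probabilities = result["output"][0][0]
predicted = classes[probabilities.index(max(probabilities))]
print(predicted)  # virginica
```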
#### Note Regarding the Input
The inputs provided above correspond to an iris flower with the following
characteristics:
1. Sepal Length: `5.5cm`
2. Sepal Width: `2.4cm`
3. Petal Length: `3.8cm`
4. Petal Width: `1.1cm`
Putting this input into a vector and scaling it, we get the following scaled input:
```python
[1.0380048, 0.5586108, 1.1037828, 1.712096]
```
Refer
to [this function in the model's repository](https://github.com/ritual-net/simple-ml-models/blob/03ebc6fb15d33efe20b7782505b1a65ce3975222/iris_classification/iris_inference_pytorch.py#L13)
for more information on how the input is scaled.
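As a rough illustration, z-score standardization of a raw measurement vector looks like the sketch below. The mean and standard-deviation values here are placeholders: the real statistics (and the exact transformation) are defined in the linked function, so this snippet will not reproduce the scaled vector above.

```python
import numpy as np

# Raw measurements: [sepal length, sepal width, petal length, petal width] in cm.
raw_input = np.array([5.5, 2.4, 3.8, 1.1])

# Placeholder statistics: in the model repository these come from the
# training data, so the output here will differ from the vector shown above.
training_mean = np.array([0.0, 0.0, 0.0, 0.0])
training_std = np.array([1.0, 1.0, 1.0, 1.0])

scaled_input = (raw_input - training_mean) / training_std
print(scaled_input.tolist())
```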
For more context on the Iris dataset, refer to
the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/iris).
## Making Inference Requests via Contracts (a la Web3 request)
The [contracts](contracts) directory contains a simple forge
project that can be used to interact with the Infernet Node.