Clarify setup steps in README.md

Example Allora network worker node: a node to provide price predictions of ETH.

One of the primary objectives is to demonstrate the utilization of a basic inference model operating within a dedicated container. The purpose is to showcase its seamless integration with the Allora network infrastructure, enabling it to contribute valuable inferences.

### Components

* **Head**: An Allora network head node. This is not required for running your node in the Allora network, but it is useful for testing your node by emulating a network.
* **Worker**: The node that will respond to inference requests from the Allora network heads.
* **Inference**: A container that conducts inferences, maintains the model state, and responds to internal inference requests via a Flask application. The node operates with a basic linear regression model for price predictions.
* **Updater**: An example of a cron-like container designed to update the inference node's data by fetching the latest market information from Binance daily, ensuring the model is kept current with new market trends.

Check the `docker-compose.yml` file to see the separate components.

### Inference request flow

When a request is made to the head, it relays this request to several workers associated with this head. The request specifies a function to run, which executes WASM code that calls the `main.py` file in the worker. The worker checks the argument (the coin to predict for), makes a request to the `inference` node, and returns this value to the `head`, which prepares the response from all of its nodes and sends it back to the requestor.
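
As a rough illustration only, a worker-side entrypoint could look like the sketch below. This is not the repo's actual `main.py`: the argument handling, the inference service URL, and the output format here are assumptions.

```
# Hypothetical sketch of a worker entrypoint (not the repo's actual main.py):
# read the coin argument, query the inference container, print JSON to stdout.
import json
import os
import sys

import requests

# Assumption: the inference service is reachable under this name on the compose network.
INFERENCE_URL = os.environ.get("INFERENCE_URL", "http://inference:8000")

def main():
    # Assumption: the coin to predict for arrives as the first CLI argument.
    token = sys.argv[1] if len(sys.argv) > 1 else "ETH"
    response = requests.get(f"{INFERENCE_URL}/inference/{token}", timeout=10)
    response.raise_for_status()
    # The head aggregates whatever the worker writes to stdout.
    print(json.dumps({"value": str(response.json())}))

if __name__ == "__main__":
    main()
```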

# Docker Setup

- head and worker nodes are built upon the `Dockerfile_b7s` file. This file is functional but simple, so you may want to change it to fit your needs if you attempt to expand upon the current setup.

For further details, please check the base repo [allora-inference-base](https://github.com/allora-network/allora-inference-base).

- inference and updater nodes are built with `Dockerfile`. This works as an example of how to reuse your current model containers, just by setting up a Flask web application in front, with minimal integration work with the Allora network nodes. A minimal sketch of such a front end is shown below.
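
As a minimal sketch of that pattern (the endpoint names, the port, and the dummy model below are illustrative assumptions, not the repo's actual code):

```
# Hypothetical Flask front end for an inference container; endpoint names,
# port, and the dummy model are illustrative assumptions, not the repo's code.
import numpy as np
from flask import Flask, jsonify
from sklearn.linear_model import LinearRegression

app = Flask(__name__)
model = LinearRegression()

def train():
    # Placeholder data; the real node would refresh its dataset (e.g. from Binance).
    x = np.arange(10).reshape(-1, 1)
    y = 2500.0 + 3.0 * np.arange(10)
    model.fit(x, y)

@app.route("/update")
def update():
    train()  # refit on freshly fetched data
    return "0"

@app.route("/inference/<token>")
def inference(token):
    # Predict the next price point for the requested token.
    return jsonify(str(model.predict(np.array([[10]]))[0]))

if __name__ == "__main__":
    train()  # initial update so the service can answer immediately
    app.run(host="0.0.0.0", port=8000)
```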

### Application path

By default, the application runtime lives under `/app`, as well as the Python code.

It is recommended to mount `/data` as a volume, to persist the node databases of peers, functions, etc., which are defined in the flags passed to the worker.
You can create this folder in the repo root directory, e.g. `mkdir data`.

It is recommended to set up two different `/data` volumes: `worker-data` for the worker and `head-data` for the head.
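
For example, from the repo root:

```
mkdir worker-data head-data
```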

Troubleshooting: a conflict may happen between the uid/gid of the user inside the container (1001) and the permissions of your own user.
To give the container user permission to write on the `/data` volume, you may need to set the UID/GID from the user running the container. You can get those on Linux/macOS via `id -u` and `id -g`.
The current `docker-compose.yml` file shows the `worker` service setting UID and GID, and the `Dockerfile` also sets UID/GID values.
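
For example, one possible fix (a sketch; adjust to your own setup):

```
# Your own uid/gid, for comparison with the container user (1001).
id -u
id -g
# If the container user cannot write to the volumes, one option is to
# hand the volume directories over to uid/gid 1001:
sudo chown -R 1001:1001 worker-data head-data
```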

# Docker-Compose Setup

A full working example is provided in the `docker-compose.yml` file.

## Setup

1. **Generate keys**: Create a set of keys for your head and worker nodes. These keys will be used in the configuration of the head and worker nodes.

**Create head keys:**

```
docker run -it --entrypoint=bash -v ./head-data:/data 696230526504.dkr.ecr.us-east-1.amazonaws.com/allora-inference-base:latest -c "mkdir -p /data/keys && (cd /data/keys && allora-keys)"
```

**Create worker keys:**

```
docker run -it --entrypoint=bash -v ./worker-data:/data 696230526504.dkr.ecr.us-east-1.amazonaws.com/allora-inference-base:latest -c "mkdir -p /data/keys && (cd /data/keys && allora-keys)"
```

Important note: If no keys are specified in the volumes, new keys will be automatically created inside `head-data/keys` and `worker-data/keys` when first running step 4.

3. **Connect the worker node to the head node**:

At this step, both the worker and head node identities have been generated inside `head-data/keys` and `worker-data/keys`. To instruct the worker node to connect to the head node:
- get the head node's peer_id specified in the `head-data/keys/identity` file (see the snippet after this list)
- use the peer_id to replace the `head-id` placeholder value specified inside the docker-compose.yml file when running the worker service: `--boot-nodes=/ip4/172.22.0.100/tcp/9010/p2p/{head-id}`
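
For example, assuming the identity file stores the peer id as plain text:

```
cat head-data/keys/identity
```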

4. **Run setup**

Once all the above is set up, run `docker compose up head worker inference`

This will bring up the head, the worker, and the inference nodes (the inference node will run an initial update). The `updater` node is a companion for updating the inference node's state; it is meant to hit the `/update` endpoint on the inference service and is expected to run periodically, which is crucial for maintaining the accuracy of the inferences.
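
As an illustration only, such a periodic trigger could be as simple as the cron entry below; the service name `inference` and port 8000 are assumptions about the compose network, and the repo's actual updater may work differently.

```
# Hypothetical cron entry: refresh the inference node's data once a day.
0 0 * * * curl -s http://inference:8000/update
```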

## Testing docker-compose setup

The head node has the only open port and responds to requests on port 6000.

Example request:
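
As a hedged sketch (the endpoint follows the b7s function-execution API; the `function_id` CID and the exact payload fields below are placeholders to verify against the repo):

```
curl --location 'http://localhost:6000/api/v1/functions/execute' \
  --header 'Content-Type: application/json' \
  --data '{
    "function_id": "bafy...your-function-cid",
    "method": "allora-inference-function.wasm",
    "topic": "1",
    "config": {
      "env_vars": [
        { "name": "ALLORA_ARG_PARAMS", "value": "ETH" }
      ],
      "number_of_nodes": -1
    }
  }'
```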

Response:

```
{"code":"200","request_id":"e3daeda0-c849-4b68-b21d-8f51e42bb9d3","results":[{"result":{"stdout":"{\"value\":\"2564.250058819078\"}\n\n\n","stderr":"","exit_code":0},"peers":["12D3KooWG8dHctRt6ctakJfG5masTnLaKM6xkudoR5BxLDRSrgVt"],"frequency":100}],"cluster":{"peers":["12D3KooWG8dHctRt6ctakJfG5masTnLaKM6xkudoR5BxLDRSrgVt"]}}
```

## Testing inference only

This setup allows you to develop your model without the need to bring up the head and worker.
To only test the inference model, you can just:
- In docker-compose.yml, under the `inference` service, uncomment the lines:
```
0
```
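
Then the service can be exercised directly; for example (assuming the uncommented lines publish the Flask app on port 8000, which is an assumption to check against your compose file):

```
docker compose up -d --build inference
curl http://localhost:8000/inference/ETH
```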

## Connecting to the Allora network

To connect to the Allora network to provide inferences, both the head and the worker need to register against it. More details are available in the [allora-inference-base](https://github.com/allora-network/allora-inference-base) repo.

The following optional flags are used in the `command:` section of the `docker-compose.yml` file to define the connectivity with the Allora network.

```
--allora-chain-key-name=index-provider # your local key name in your keyring
--allora-chain-restore-mnemonic='pet sock excess ...' # your node's Allora address mnemonic
--allora-node-rpc-address= # RPC address of a node in the chain
--allora-chain-topic-id= # the topic id from the chain that you want to provide predictions for
```

For the nodes to register with the chain, a funded address is needed first.
If these flags are not provided, the nodes will not register to the appchain and will not attempt to connect to the appchain.

The relevant excerpts of `docker-compose.yml` (the `services:` section; elided lines are marked with `...`, and the indentation is indicative):

```
services:
  ...
      allora-node --role=worker --peer-db=/data/peerdb --function-db=/data/function-db \
        --runtime-path=/app/runtime --runtime-cli=bls-runtime --workspace=/data/workspace \
        --private-key=/data/keys/priv.bin --log-level=debug --port=9011 \
        --boot-nodes=/ip4/172.22.0.100/tcp/9010/p2p/head-id \
        --topic=1
    volumes:
      - ./worker-data:/data
  ...
        ipv4_address: 172.22.0.10

  head:
    container_name: head-basic-eth-pred
    image: 696230526504.dkr.ecr.us-east-1.amazonaws.com/allora-inference-base-head:latest
    environment:
      ...
```