Cleanup and update readme
commit 9198fd4fb1 (parent 9db05c7140)
.gitignore (vendored, 1 change)

@@ -9,3 +9,4 @@ inference-data
config.json
env
env_file
README.md (191 changes)

@@ -1,150 +1,75 @@
# Basic ETH price prediction node
# Basic ETH Price Prediction Node

Example Allora network worker node: a node to provide price predictions of ETH.
This repository provides an example Allora network worker node, designed to offer price predictions for ETH. The primary objective is to demonstrate the use of a basic inference model running within a dedicated container, showcasing its integration with the Allora network infrastructure to contribute valuable inferences.

One of the primary objectives is to demonstrate the use of a basic inference model operating within a dedicated container, and to showcase its seamless integration with the Allora network infrastructure, enabling it to contribute valuable inferences.

## Components

### Components
- **Worker**: The node that publishes inferences to the Allora chain.
- **Inference**: A container that conducts inferences, maintains the model state, and responds to internal inference requests via a Flask application. This node operates with a basic linear regression model for price predictions.
- **Updater**: A cron-like container designed to update the inference node's data by fetching the latest market information from Binance daily, ensuring the model stays current with new market trends.

* **Head**: An Allora network head node. This is not required for running your node in the Allora network, but it helps for testing your node by emulating a network.
* **Worker**: The node that responds to inference requests from the Allora network heads.
* **Inference**: A container that conducts inferences, maintains the model state, and responds to internal inference requests via a Flask application. The node operates with a basic linear regression model for price predictions.
* **Updater**: An example of a cron-like container designed to update the inference node's data by fetching the latest market information from Binance daily, ensuring the model is kept current with new market trends.

Check the `docker-compose.yml` file for the detailed setup of each component.

Check the `docker-compose.yml` file to see the separate components.

## Docker-Compose Setup

### Inference request flow

A complete working example is provided in the `docker-compose.yml` file.

When a request is made to the head, it relays the request to several workers associated with it. The request specifies a function to run, which executes WASM code that calls the `main.py` file in the worker. The worker checks the argument (the coin to predict for), makes a request to the `inference` node, and returns the value to the `head`, which prepares the response from all of its nodes and sends it back to the requester.
### Steps to Setup

# Docker Setup

1. **Clone the Repository**

2. **Copy and Populate Configuration**

    Copy the example configuration file and populate it with your variables:

    ```sh
    cp config.example.json config.json
    ```

- Head and worker nodes are built from the `Dockerfile_b7s` file. This file is functional but simple, so you may want to change it to fit your needs if you expand upon the current setup.
  For further details, please check the base repo [allora-inference-base](https://github.com/allora-network/allora-inference-base).
- Inference and updater nodes are built with `Dockerfile`. This works as an example of how to reuse your current model containers, just by setting up a Flask web application in front, with minimal integration work with the Allora network nodes.
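For reference, a populated `config.json` might look roughly like the following sketch. The field names are taken from the `ALLORA_OFFCHAIN_NODE_CONFIG_JSON` example env_file elsewhere in this repository; every value below is a placeholder to replace with your own.

```json
{
  "wallet": {
    "addressKeyName": "my-worker-key",
    "addressRestoreMnemonic": "your mnemonic words ...",
    "addressAccountPassphrase": "secret",
    "nodeRpc": "https://<your-rpc-endpoint>",
    "gas": "1000000",
    "maxRetries": 1,
    "submitTx": false
  },
  "worker": [
    {
      "topicId": 1,
      "inferenceEntrypointName": "api-worker-reputer",
      "loopSeconds": 5,
      "parameters": {
        "InferenceEndpoint": "http://inference:8000/inference/{Token}",
        "Token": "ETH"
      }
    }
  ]
}
```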
3. **Initialize Worker**

    Run the following commands from the project's root directory to initialize the worker:

    ```sh
    chmod +x init.docker
    ./init.docker
    ```

    These commands will:
    - Automatically create Allora keys for your worker.
    - Export the needed variables from the created account to be used by the worker node, bundle them with your provided `config.json`, and pass them to the node as environment variables.
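The `jq` steps that `init.docker` performs can be sketched in isolation. The key name below is a hypothetical placeholder, not one from this repo:

```sh
# Read wallet.addressKeyName the way the init script does
# (jq -r '.wallet.addressKeyName' config.json), using an inline sample:
config='{"wallet":{"addressKeyName":"my-worker-key"}}'
nodeName=$(echo "$config" | jq -r '.wallet.addressKeyName')
echo "$nodeName"          # my-worker-key

# The script then compacts the whole config to a single line before
# passing it to the container as ALLORA_OFFCHAIN_NODE_CONFIG_JSON:
echo "$config" | jq -c .  # {"wallet":{"addressKeyName":"my-worker-key"}}
```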
### Application path

4. **Faucet Your Worker Node**

    You can find the offchain worker node's address in `./worker-data/env_file` under `ALLORA_OFFCHAIN_ACCOUNT_ADDRESS`. Request some tokens from the faucet to register your worker.
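If you want the address on its own (for pasting into a faucet), it can be grepped out of the env_file. This sketch writes a stand-in file first so it runs anywhere; against a real setup you would point it at `./worker-data/env_file` (the address below is the sample one from this repository's example env_file):

```sh
# Stand-in env_file; in practice use ./worker-data/env_file directly
envfile=$(mktemp)
echo "ALLORA_OFFCHAIN_ACCOUNT_ADDRESS=allo14wkkdeg93mdc0sd770z9p4mpjz7w9mysz328um" > "$envfile"

# Extract just the address value
grep '^ALLORA_OFFCHAIN_ACCOUNT_ADDRESS=' "$envfile" | cut -d '=' -f 2
rm -f "$envfile"
```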
By default, the application runtime lives under `/app`, as does the Python code the worker provides (`/app/main.py`). The current user needs to have write permissions on `/app/runtime`.

5. **Start the Services**

    Run the following command to start the worker, inference, and updater nodes:

    ```sh
    docker compose up --build
    ```

    To confirm that the worker successfully sends inferences to the chain, look for the following log:

    ```
    {"level":"debug","msg":"Send Worker Data to chain","txHash":<tx-hash>,"time":<timestamp>,"message":"Success"}
    ```
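If you want to check for that line programmatically rather than by eye, you can filter the logs through `jq`. A self-contained sketch, using a sample success line with placeholder `txHash`/`time` values:

```sh
# A captured success log line (txHash and time are placeholders)
log='{"level":"debug","msg":"Send Worker Data to chain","txHash":"0xabc","time":"2024-01-01T00:00:00Z","message":"Success"}'

# Pull out the status; against a live stack you could pipe
# `docker compose logs` through a similar filter.
echo "$log" | jq -r '.message'   # Success
```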
### Data volume and permissions

## Testing Inference Only

It is recommended to mount the `/worker-data` and `/head-data` folders as volumes to persist the node databases of peers, functions, etc., which are defined in the flags passed to the worker.
You can create two different `/data` volumes; it is suggested to use `worker-data` for the worker and `head-data` for the head:
`mkdir worker-data && mkdir head-data`.
This setup allows you to develop your model without needing to bring up the head and worker. To test the inference model only:

Troubleshooting: a conflict may arise between the UID/GID of the user inside the container (1001) and the permissions of your own user.
To give the container user permission to write on the `/data` volume, you may need to set the UID/GID to those of the user running the container. On Linux/macOS you can get them via `id -u` and `id -g`.
The current `docker-compose.yml` file shows the `worker` service setting UID and GID, and the `Dockerfile` also sets UID/GID values.
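For example, you can look up your own numeric IDs before bringing the stack up (whether your compose file reads them from the environment depends on your setup; treat this as a sketch):

```sh
# Numeric user and group IDs of the current user, as referenced above
uid=$(id -u)
gid=$(id -g)
echo "UID=$uid GID=$gid"
```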
1. Run the following command to start the inference node:

    ```sh
    docker compose up --build inference
    ```

    Wait for the initial data load.

# Docker-Compose Setup

A full working example is provided in the `docker-compose.yml` file.
1. **Generate keys**: Create a set of keys for your head and worker nodes. These keys will be used in the configuration of the head and worker nodes.

    **Create head keys:**

    ```
    docker run -it --entrypoint=bash -v ./head-data:/data alloranetwork/allora-inference-base:latest -c "mkdir -p /data/keys && (cd /data/keys && allora-keys)"
    ```

    **Create worker keys:**

    ```
    docker run -it --entrypoint=bash -v ./worker-data:/data alloranetwork/allora-inference-base:latest -c "mkdir -p /data/keys && (cd /data/keys && allora-keys)"
    ```

    Important note: if no keys are specified in the volumes, new keys will be created automatically inside `head-data/keys` and `worker-data/keys` when first running step 3.

2. **Connect the worker node to the head node**:

    At this step, both the worker and head node identities have been generated inside `head-data/keys` and `worker-data/keys`.
    To instruct the worker node to connect to the head node:
    - run `cat head-data/keys/identity` to extract the head node's peer ID from `head-data/keys/identity`
    - use the printed peer ID to replace the `{HEAD-ID}` placeholder value in the `docker-compose.yml` file when running the worker service: `--boot-nodes=/ip4/172.22.0.100/tcp/9010/p2p/{HEAD-ID}`
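Building the flag value can also be scripted. A sketch with a hypothetical peer ID (in practice it comes from `cat head-data/keys/identity`):

```sh
# Compose the --boot-nodes multiaddr from the head's peer ID
HEAD_ID="12D3KooWExampleExampleExampleExampleExample"
echo "--boot-nodes=/ip4/172.22.0.100/tcp/9010/p2p/${HEAD_ID}"
```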
3. **Run setup**

    Once all of the above is set up, run `docker compose up --build`.
    This will bring up the head, worker, and inference nodes (which will run an initial update). The `updater` node is a companion for updating the inference node's state; it hits the `/update` endpoint on the inference service. It is expected to run periodically and is crucial for maintaining the accuracy of the inferences.

## Testing docker-compose setup

The head node exposes the only open port and responds to requests on port 6000.

Example request:
```
curl --location 'http://127.0.0.1:6000/api/v1/functions/execute' \
--header 'Content-Type: application/json' \
--data '{
    "function_id": "bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm",
    "method": "allora-inference-function.wasm",
    "parameters": null,
    "topic": "1",
    "config": {
        "env_vars": [
            {
                "name": "BLS_REQUEST_PATH",
                "value": "/api"
            },
            {
                "name": "ALLORA_ARG_PARAMS",
                "value": "ETH"
            }
        ],
        "number_of_nodes": -1,
        "timeout": 5
    }
}'
```
Response:

```
{
    "code": "200",
    "request_id": "14be2a82-432c-4bae-bc1a-20c7627e0ebc",
    "results": [
        {
            "result": {
                "stdout": "{\"infererValue\": \"2946.450220116334\"}\n\n",
                "stderr": "",
                "exit_code": 0
            },
            "peers": [
                "12D3KooWGHYZAR5YBgJHvG8o8GxBJpV5ANLUfL1UReX5Lizg5iKf"
            ],
            "frequency": 100
        }
    ],
    "cluster": {
        "peers": [
            "12D3KooWGHYZAR5YBgJHvG8o8GxBJpV5ANLUfL1UReX5Lizg5iKf"
        ]
    }
}
```
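Note that the inference value is nested inside the `stdout` field, which is itself a JSON string. With `jq` it can be decoded in one pass; a convenience sketch (not part of the repo), using a trimmed copy of the response above:

```sh
# Trimmed response; .stdout holds a JSON document as a string
response='{"results":[{"result":{"stdout":"{\"infererValue\": \"2946.450220116334\"}\n\n","exit_code":0}}]}'

# Decode the outer response, then decode stdout with fromjson
echo "$response" | jq -r '.results[0].result.stdout | fromjson | .infererValue'
# 2946.450220116334
```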
## Testing inference only

This setup allows you to develop your model without needing to bring up the head and worker.
To test only the inference model, you can:

- Run `docker compose up --build inference` and wait for the initial data load.
- Requests can now be sent, e.g. request ETH price inferences as in:

  ```
  $ curl http://127.0.0.1:8000/inference/ETH
  ```

2. Send requests to the inference model. For example, request ETH price inferences:

    ```sh
    curl http://127.0.0.1:8000/inference/ETH
    ```

    Expected response:

    ```json
    {"value":"2564.021586281073"}
    ```
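To use the returned value in a script, it can be extracted with `jq` (a sketch using the sample response above):

```sh
# Pull the prediction out of the inference response
resp='{"value":"2564.021586281073"}'
echo "$resp" | jq -r '.value'   # 2564.021586281073
```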
or update the node's internal state (download pricing data, train, and update the model):

```
$ curl http://127.0.0.1:8000/update
```

3. Update the node's internal state (download pricing data, train, and update the model):

    ```sh
    curl http://127.0.0.1:8000/update
    ```

    Expected response:

    ```sh
    0
    ```

## Connecting to the Allora network

To connect to the Allora network to provide inferences, both the head and the worker need to register with it. More details in the [allora-inference-base](https://github.com/allora-network/allora-inference-base) repo.
The following optional flags are used in the `command:` section of the `docker-compose.yml` file to define connectivity with the Allora network.

```
--allora-chain-key-name=index-provider # your local key name in your keyring
--allora-chain-restore-mnemonic='pet sock excess ...' # your node's Allora address mnemonic
--allora-node-rpc-address= # RPC address of a node in the chain
--allora-chain-topic-id= # the topic ID on the chain that you want to provide predictions for
```

For the nodes to register with the chain, a funded address is needed first.
If these flags are not provided, the nodes will not register with the appchain and will not attempt to connect to it.
docker-compose.yml

@@ -33,16 +33,16 @@ services:
      inference:
        condition: service_healthy

  node:
    container_name: offchain_node
    image: allora-offchain-node:latest
  worker:
    container_name: worker
    image: alloranetwork/allora-offchain-node:latest
    volumes:
      - ./offchain-node-data:/data
      - ./worker-data:/data
    depends_on:
      inference:
        condition: service_healthy
    env_file:
      - ./offchain-node-data/env_file
      - ./worker-data/env_file

networks:
  eth-model-local:

@@ -53,4 +53,4 @@ networks:

volumes:
  inference-data:
  offchain-node-data:
  worker-data:
init.docker (executable file, 29 lines)

@@ -0,0 +1,29 @@
#!/bin/bash

set -e

if [ ! -f config.json ]; then
    echo "Error: config.json file not found, please provide one"
    exit 1
fi

# jq -r prints the literal string "null" when the key is absent,
# so check for that as well as the empty string
nodeName=$(jq -r '.wallet.addressKeyName' config.json)
if [ -z "$nodeName" ] || [ "$nodeName" = "null" ]; then
    echo "No name was provided for the node, please provide a value for wallet.addressKeyName in config.json"
    exit 1
fi

if [ ! -f ./worker-data/env_file ]; then
    echo "ENV_LOADED=false" > ./worker-data/env_file
fi

ENV_LOADED=$(grep '^ENV_LOADED=' ./worker-data/env_file | cut -d '=' -f 2)
if [ "$ENV_LOADED" = "false" ]; then
    json_content=$(cat ./config.json)
    stringified_json=$(echo "$json_content" | jq -c .)

    docker run -it --entrypoint=bash -v "$(pwd)/worker-data:/data" -e NAME="${nodeName}" -e ALLORA_OFFCHAIN_NODE_CONFIG_JSON="${stringified_json}" alloranetwork/allora-chain:latest -c "bash /data/scripts/init.sh"
    echo "config.json saved to ./worker-data/env_file"
else
    echo "config.json is already loaded, skipping the operation. You can set the ENV_LOADED variable to false in ./worker-data/env_file to reload config.json"
fi
@@ -1,29 +0,0 @@
#!/bin/bash

set -e

if [ ! -f config.json ]; then
    echo "Error: config.json file not found, please provide one"
    exit 1
fi

nodeName=$(jq -r '.wallet.addressKeyName' config.json)
if [ -z "$nodeName" ]; then
    echo "No name was provided for the node, please provide value for wallet.addressKeyName in the config.json"
    exit 1
fi

if [ ! -f ./offchain-node-data/env_file ]; then
    echo "ENV_LOADED=false" > ./offchain-node-data/env_file
fi

ENV_LOADED=$(grep '^ENV_LOADED=' ./offchain-node-data/env_file | cut -d '=' -f 2)
if [ "$ENV_LOADED" = "false" ]; then
    json_content=$(cat ./config.json)
    stringified_json=$(echo "$json_content" | jq -c .)

    docker run -it --entrypoint=bash -v $(pwd)/offchain-node-data:/data -e NAME="${nodeName}" -e ALLORA_OFFCHAIN_NODE_CONFIG_JSON="${stringified_json}" alloranetwork/allora-chain:latest -c "bash /data/scripts/init.sh"
    echo "config.json saved to ./offchain-node-data/env_file"
else
    echo "config.json is already loaded, skipping the operation. You can set ENV_LOADED variable to false in ./offchain-node-data/env_file to reload the config.json"
fi
@@ -1,4 +0,0 @@
ALLORA_OFFCHAIN_NODE_CONFIG_JSON='{"wallet":{"addressKeyName":"basic-coin-prediction-offchain-node","addressRestoreMnemonic":"rich note fetch lava bless snake delay theme era anger ritual sea pluck neck hazard dish talk ranch trophy clap fancy human divide gun","addressAccountPassphrase":"secret","alloraHomeDir":"","gas":"1000000","gasAdjustment":1,"nodeRpc":"https://allora-rpc.devnet.behindthecurtain.xyz","maxRetries":1,"minDelay":1,"maxDelay":2,"submitTx":false},"worker":[{"topicId":1,"inferenceEntrypointName":"api-worker-reputer","loopSeconds":5,"parameters":{"InferenceEndpoint":"http://inference:8000/inference/{Token}","Token":"ETH"}}]}'
ALLORA_OFFCHAIN_ACCOUNT_ADDRESS=allo14wkkdeg93mdc0sd770z9p4mpjz7w9mysz328um
NAME=basic-coin-prediction-offchain-node
ENV_LOADED=true