Merge pull request #11 from allora-network/remove-b7s

Remove b7s and add support for offchain-node

commit 8ddfacdc5a

.gitignore (vendored): 14 lines changed
@@ -1,12 +1,12 @@
.DS_Store
__pycache__
*.pyc
.lake_cache/*
logs/*
.env
keys
data
.allorad
.cache
inference-data
worker-data
head-data
lib
config.json
env
env_file
@@ -1,7 +0,0 @@
FROM alloranetwork/allora-inference-base:latest

USER root
RUN pip install requests

USER appuser
COPY main.py /app/
README.md: 183 lines changed
@@ -1,150 +1,81 @@
# Basic ETH price prediction node
# Basic ETH Price Prediction Node

Example Allora network worker node: a node to provide price predictions of ETH.
This repository provides an example Allora network worker node, designed to offer price predictions for ETH. The primary objective is to demonstrate the use of a basic inference model running within a dedicated container, showcasing its integration with the Allora network infrastructure to contribute valuable inferences.

One of the primary objectives is to demonstrate the use of a basic inference model operating within a dedicated container. The purpose is to showcase its seamless integration with the Allora network infrastructure, enabling it to contribute valuable inferences.

## Components

### Components
- **Worker**: The node that publishes inferences to the Allora chain.
- **Inference**: A container that conducts inferences, maintains the model state, and responds to internal inference requests via a Flask application. This node operates with a basic linear regression model for price predictions.
- **Updater**: A cron-like container designed to update the inference node's data by fetching the latest market information from Binance daily, ensuring the model stays current with new market trends.

* **Head**: An Allora network head node. This is not required for running your node in the Allora network, but it helps for testing your node by emulating a network.
* **Worker**: The node that will respond to inference requests from the Allora network heads.
* **Inference**: A container that conducts inferences, maintains the model state, and responds to internal inference requests via a Flask application. The node operates with a basic linear regression model for price predictions.
* **Updater**: An example of a cron-like container designed to update the inference node's data by fetching the latest market information from Binance daily, ensuring the model is kept current with new market trends.

Check the `docker-compose.yml` file for the detailed setup of each component.

Check the `docker-compose.yml` file to see the separate components.

## Docker-Compose Setup
### Inference request flow
A complete working example is provided in the `docker-compose.yml` file.

When a request is made to the head, it relays the request to several workers associated with it. The request specifies a function to run, which executes WASM code that calls the `main.py` file in the worker. The worker checks the argument (the coin to predict for), makes a request to the `inference` node, and returns the value to the `head`, which prepares the response from all of its nodes and sends it back to the requestor.

### Steps to Setup

# Docker Setup
1. **Clone the Repository**
2. **Copy and Populate Configuration**

- Head and worker nodes are built upon the `Dockerfile_b7s` file. This file is functional but simple, so you may want to change it to fit your needs if you attempt to expand upon the current setup.
For further details, please check the base repo [allora-inference-base](https://github.com/allora-network/allora-inference-base).
- Inference and updater nodes are built with `Dockerfile`. This works as an example of how to reuse your current model containers, just by setting up a Flask web application in front, with minimal integration work with the Allora network nodes.

### Application path

By default, the application runtime lives under `/app`, as does the Python code the worker provides (`/app/main.py`). The current user needs write permissions on `/app/runtime`.

### Data volume and permissions

It is recommended to mount the `/worker-data` and `/head-data` folders as volumes to persist the node databases of peers, functions, etc., which are defined in the flags passed to the worker.
You can create two different `/data` volumes. It is suggested to use `worker-data` for the worker and `head-data` for the head:
`mkdir worker-data && mkdir head-data`.

Troubleshooting: A conflict may arise between the uid/gid of the user inside the container (1001) and the permissions of your own user.
To give the container user permission to write on the `/data` volume, you may need to set the UID/GID to those of the user running the container. You can get these on Linux/macOS via `id -u` and `id -g`.
The current `docker-compose.yml` file shows the `worker` service setting UID and GID. The `Dockerfile` also sets UID/GID values.
# Docker-Compose Setup
A full working example is provided in the `docker-compose.yml` file.

1. **Generate keys**: Create a set of keys for your head and worker nodes. These keys will be used in the configuration of the head and worker nodes.

**Create head keys:**
```sh
docker run -it --entrypoint=bash -v ./head-data:/data alloranetwork/allora-inference-base:latest -c "mkdir -p /data/keys && (cd /data/keys && allora-keys)"
```

Copy the example configuration file and populate it with your variables:
```sh
cp config.example.json config.json
```

**Create worker keys:**
```sh
docker run -it --entrypoint=bash -v ./worker-data:/data alloranetwork/allora-inference-base:latest -c "mkdir -p /data/keys && (cd /data/keys && allora-keys)"
```

3. **Initialize Worker**

Run the following commands from the project's root directory to initialize the worker:
```sh
chmod +x init.docker
./init.docker
```

These commands will:
- Automatically create Allora keys for your worker.
- Export the needed variables from the created account to be used by the worker node, bundle them with your provided `config.json`, and pass them to the node as environment variables.

4. **Faucet Your Worker Node**

You can find the offchain worker node's address in `./worker-data/env_file` under `ALLORA_OFFCHAIN_ACCOUNT_ADDRESS`. [Add faucet funds](https://docs.allora.network/devs/get-started/setup-wallet#add-faucet-funds) to your worker's wallet before starting it.

5. **Start the Services**

Run the following command to start the worker, inference, and updater nodes:
```sh
docker compose up --build
```

To confirm that the worker successfully sends inferences to the chain, look for the following log:
```
{"level":"debug","msg":"Send Worker Data to chain","txHash":<tx-hash>,"time":<timestamp>,"message":"Success"}
```
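To confirm submission programmatically rather than by eye, a minimal sketch that parses one such log line as JSON (the `txHash` and `time` values below are made-up placeholders, not real chain data):

```python
import json

# Hypothetical log line; a real one carries an actual txHash and timestamp.
line = ('{"level":"debug","msg":"Send Worker Data to chain",'
        '"txHash":"ABCD1234","time":"2024-01-01T00:00:00Z","message":"Success"}')

entry = json.loads(line)
# Submission succeeded if both the message name and status match.
submitted = entry["msg"] == "Send Worker Data to chain" and entry["message"] == "Success"
print(submitted)
```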
Important note: If no keys are specified in the volumes, new keys will be automatically created inside `head-data/keys` and `worker-data/keys` when first running step 3.

## Testing Inference Only

2. **Connect the worker node to the head node**:

This setup allows you to develop your model without needing to bring up the head and worker. To test the inference model only:

At this step, both worker and head node identities are generated inside `head-data/keys` and `worker-data/keys`.
To instruct the worker node to connect to the head node:
- Run `cat head-data/keys/identity` to extract the head node's peer_id from `head-data/keys/identity`.
- Use the printed peer_id to replace the `{HEAD-ID}` placeholder value specified inside the `docker-compose.yml` file when running the worker service: `--boot-nodes=/ip4/172.22.0.100/tcp/9010/p2p/{HEAD-ID}`

3. **Run setup**
Once all the above is set up, run `docker compose up --build`.
This will bring up the head, worker, and inference nodes (the latter running an initial update). The `updater` node is a companion for updating the inference node state; it hits the `/update` endpoint on the inference service. It is expected to run periodically, which is crucial for maintaining the accuracy of the inferences.

## Testing docker-compose setup

The head node has the only open port and responds to requests on port 6000.

Example request:
```
curl --location 'http://127.0.0.1:6000/api/v1/functions/execute' \
--header 'Content-Type: application/json' \
--data '{
    "function_id": "bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm",
    "method": "allora-inference-function.wasm",
    "parameters": null,
    "topic": "1",
    "config": {
        "env_vars": [
            {
                "name": "BLS_REQUEST_PATH",
                "value": "/api"
            },
            {
                "name": "ALLORA_ARG_PARAMS",
                "value": "ETH"
            }
        ],
        "number_of_nodes": -1,
        "timeout": 5
    }
}'
```
Response:
```
{
    "code": "200",
    "request_id": "14be2a82-432c-4bae-bc1a-20c7627e0ebc",
    "results": [
        {
            "result": {
                "stdout": "{\"infererValue\": \"2946.450220116334\"}\n\n",
                "stderr": "",
                "exit_code": 0
            },
            "peers": [
                "12D3KooWGHYZAR5YBgJHvG8o8GxBJpV5ANLUfL1UReX5Lizg5iKf"
            ],
            "frequency": 100
        }
    ],
    "cluster": {
        "peers": [
            "12D3KooWGHYZAR5YBgJHvG8o8GxBJpV5ANLUfL1UReX5Lizg5iKf"
        ]
    }
}
```
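Note that the `stdout` field in this response is itself a JSON-encoded string, so a consumer needs a second decode pass to reach the inference value. A minimal sketch (the response is abridged to just the fields used here):

```python
import json

# Abridged copy of the head response shown above.
response = {
    "code": "200",
    "results": [
        {
            "result": {
                "stdout": "{\"infererValue\": \"2946.450220116334\"}\n\n",
                "stderr": "",
                "exit_code": 0,
            }
        }
    ],
}

# The outer response is already a dict; stdout needs its own json.loads.
inner = json.loads(response["results"][0]["result"]["stdout"])
value = float(inner["infererValue"])
print(value)
```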
1. Run the following command to start the inference node:
```sh
docker compose up --build inference
```
Wait for the initial data load.
## Testing inference only

This setup allows you to develop your model without needing to bring up the head and worker.
To test only the inference model, you can:
- Run `docker compose up --build inference` and wait for the initial data load.
- Send requests, e.g. request ETH price inferences as in:

2. Send requests to the inference model. For example, request ETH price inferences:

```sh
curl http://127.0.0.1:8000/inference/ETH
```
Expected response:
```json
{"value":"2564.021586281073"}
```
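Client code consuming this endpoint should note that `value` arrives as a string-encoded float. A small parsing sketch with basic error handling (the helper name `parse_inference` is ours for illustration, not part of the repo):

```python
import json

def parse_inference(body: str) -> float:
    """Parse the inference endpoint's JSON body into a float price."""
    try:
        return float(json.loads(body)["value"])
    except (KeyError, ValueError) as exc:
        # Covers missing "value", non-numeric strings, and malformed JSON.
        raise ValueError(f"unexpected inference response: {body!r}") from exc

print(parse_inference('{"value":"2564.021586281073"}'))
```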
Or update the node's internal state (download pricing data, train, and update the model):

3. Update the node's internal state (download pricing data, train, and update the model):

```sh
curl http://127.0.0.1:8000/update
```
Expected response:
```sh
0
```
## Connecting to the Allora network

To provide inferences to the Allora network, both the head and the worker need to register with it. More details are in the [allora-inference-base](https://github.com/allora-network/allora-inference-base) repo.
The following optional flags are used in the `command:` section of the `docker-compose.yml` file to define the connectivity with the Allora network.

```
--allora-chain-key-name=index-provider # your local key name in your keyring
--allora-chain-restore-mnemonic='pet sock excess ...' # your node's Allora address mnemonic
--allora-node-rpc-address= # RPC address of a node in the chain
--allora-chain-topic-id= # the topic id from the chain that you want to provide predictions for
```
For the nodes to register with the chain, a funded address is needed first.
If these flags are not provided, the nodes will not register with the appchain and will not attempt to connect to it.
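Wired into the `command:` section of `docker-compose.yml`, these flags would simply be appended to the `allora-node` invocation. An illustrative fragment (all values are placeholders, not working credentials or endpoints):

```
allora-node --role=worker --peer-db=/data/peerdb \
  --private-key=/data/keys/priv.bin --port=9011 \
  --allora-chain-key-name=index-provider \
  --allora-chain-restore-mnemonic='pet sock excess ...' \
  --allora-node-rpc-address=https://your-allora-rpc-endpoint \
  --allora-chain-topic-id=1
```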
config.example.json (new file, 24 lines)

@@ -0,0 +1,24 @@
{
    "wallet": {
        "addressKeyName": "test",
        "addressRestoreMnemonic": "",
        "alloraHomeDir": "",
        "gas": "1000000",
        "gasAdjustment": 1.0,
        "nodeRpc": "http://localhost:26657",
        "maxRetries": 1,
        "delay": 1,
        "submitTx": false
    },
    "worker": [
        {
            "topicId": 1,
            "inferenceEntrypointName": "api-worker-reputer",
            "loopSeconds": 5,
            "parameters": {
                "InferenceEndpoint": "http://source:8000/inference/{Token}",
                "Token": "ETH"
            }
        }
    ]
}
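Before running `init.docker`, it can help to sanity-check your `config.json` with `jq`, the same tool the init script uses. A sketch (assumes `jq` is installed; the inline config below is a trimmed stand-in for your real file):

```shell
# Write a trimmed stand-in config; in practice, point these checks at your real config.json.
cat > /tmp/config-check.json <<'EOF'
{"wallet": {"addressKeyName": "test", "nodeRpc": "http://localhost:26657"}}
EOF

# -e makes jq exit non-zero on invalid JSON.
jq -e . /tmp/config-check.json > /dev/null && echo "valid JSON"
# Same field extraction init.docker performs.
nodeName=$(jq -r '.wallet.addressKeyName' /tmp/config-check.json)
echo "key name: $nodeName"
```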
@@ -1,16 +1,10 @@
services:
  inference:
    container_name: inference-basic-eth-pred
    build:
      context: .
    build: .
    command: python -u /app/app.py
    ports:
      - "8000:8000"
    networks:
      eth-model-local:
        aliases:
          - inference
        ipv4_address: 172.22.0.4
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/inference/ETH"]
      interval: 10s
@@ -34,87 +28,18 @@ services:
    depends_on:
      inference:
        condition: service_healthy
    networks:
      eth-model-local:
        aliases:
          - updater
        ipv4_address: 172.22.0.5

  head:
    container_name: head-basic-eth-pred
    image: alloranetwork/allora-inference-base-head:latest
    environment:
      - HOME=/data
    entrypoint:
      - "/bin/bash"
      - "-c"
      - |
        if [ ! -f /data/keys/priv.bin ]; then
          echo "Generating new private keys..."
          mkdir -p /data/keys
          cd /data/keys
          allora-keys
        fi
        allora-node --role=head --peer-db=/data/peerdb --function-db=/data/function-db \
          --runtime-path=/app/runtime --runtime-cli=bls-runtime --workspace=/data/workspace \
          --private-key=/data/keys/priv.bin --log-level=debug --port=9010 --rest-api=:6000
    ports:
      - "6000:6000"
    volumes:
      - ./head-data:/data
    working_dir: /data
    networks:
      eth-model-local:
        aliases:
          - head
        ipv4_address: 172.22.0.100

  worker:
    container_name: worker-basic-eth-pred
    environment:
      - INFERENCE_API_ADDRESS=http://inference:8000
      - HOME=/data
    build:
      context: .
      dockerfile: Dockerfile_b7s
    entrypoint:
      - "/bin/bash"
      - "-c"
      - |
        if [ ! -f /data/keys/priv.bin ]; then
          echo "Generating new private keys..."
          mkdir -p /data/keys
          cd /data/keys
          allora-keys
        fi
        # Change boot-nodes below to the key advertised by your head
        allora-node --role=worker --peer-db=/data/peerdb --function-db=/data/function-db \
          --runtime-path=/app/runtime --runtime-cli=bls-runtime --workspace=/data/workspace \
          --private-key=/data/keys/priv.bin --log-level=debug --port=9011 \
          --boot-nodes=/ip4/172.22.0.100/tcp/9010/p2p/{HEAD-ID} \
          --topic=allora-topic-1-worker
    container_name: worker
    image: alloranetwork/allora-offchain-node:latest
    volumes:
      - ./worker-data:/data
    working_dir: /data
    depends_on:
      - inference
      - head
    networks:
      eth-model-local:
        aliases:
          - worker
        ipv4_address: 172.22.0.10

networks:
  eth-model-local:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/24
    inference:
      condition: service_healthy
    env_file:
      - ./worker-data/env_file

volumes:
  inference-data:
  worker-data:
  head-data:
init.docker (new executable file, 29 lines)

@@ -0,0 +1,29 @@
#!/bin/bash

set -e

if [ ! -f config.json ]; then
    echo "Error: config.json file not found, please provide one"
    exit 1
fi

nodeName=$(jq -r '.wallet.addressKeyName' config.json)
# jq -r prints the literal string "null" when the key is absent, so check both.
if [ -z "$nodeName" ] || [ "$nodeName" = "null" ]; then
    echo "No name was provided for the node, please provide a value for wallet.addressKeyName in config.json"
    exit 1
fi

if [ ! -f ./worker-data/env_file ]; then
    echo "ENV_LOADED=false" > ./worker-data/env_file
fi

ENV_LOADED=$(grep '^ENV_LOADED=' ./worker-data/env_file | cut -d '=' -f 2)
if [ "$ENV_LOADED" = "false" ]; then
    json_content=$(cat ./config.json)
    stringified_json=$(echo "$json_content" | jq -c .)

    docker run -it --entrypoint=bash -v "$(pwd)/worker-data:/data" -e NAME="${nodeName}" -e ALLORA_OFFCHAIN_NODE_CONFIG_JSON="${stringified_json}" alloranetwork/allora-chain:latest -c "bash /data/scripts/init.sh"
    echo "config.json saved to ./worker-data/env_file"
else
    echo "config.json is already loaded, skipping the operation. You can set the ENV_LOADED variable to false in ./worker-data/env_file to reload config.json"
fi
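After a successful run, `./worker-data/env_file` contains lines of roughly this shape, written by `worker-data/scripts/init.sh` (the address and JSON below are illustrative placeholders):

```
ALLORA_OFFCHAIN_NODE_CONFIG_JSON='{"wallet":{"addressKeyName":"test",...}}'
ALLORA_OFFCHAIN_ACCOUNT_ADDRESS=allo1...
NAME=test
ENV_LOADED=true
```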
main.py (deleted, 31 lines)

@@ -1,31 +0,0 @@
import os
import requests
import sys
import json

INFERENCE_ADDRESS = os.environ["INFERENCE_API_ADDRESS"]


def process(token_name):
    response = requests.get(f"{INFERENCE_ADDRESS}/inference/{token_name}")
    content = response.text
    return content


if __name__ == "__main__":
    # Your code logic with the parsed argument goes here
    try:
        if len(sys.argv) < 5:
            value = json.dumps({"error": f"Not enough arguments provided: {len(sys.argv)}, expected 4 arguments: topic_id, blockHeight, blockHeightEval, default_arg"})
        else:
            topic_id = sys.argv[1]
            blockHeight = sys.argv[2]
            blockHeightEval = sys.argv[3]
            default_arg = sys.argv[4]

            response_inference = process(token_name=default_arg)
            response_dict = {"infererValue": response_inference}
            value = json.dumps(response_dict)
    except Exception as e:
        # str(e), not {str(e)}: a set literal here is not JSON-serializable
        value = json.dumps({"error": str(e)})
    print(value)
worker-data/scripts/init.sh (new file, 33 lines)

@@ -0,0 +1,33 @@
#!/bin/bash

set -e

if allorad keys --home=/data/.allorad --keyring-backend test show "$NAME" > /dev/null 2>&1; then
    echo "allora account: $NAME already imported"
else
    echo "creating allora account: $NAME"
    output=$(allorad keys add "$NAME" --home=/data/.allorad --keyring-backend test 2>&1)
    address=$(echo "$output" | grep 'address:' | sed 's/.*address: //')
    mnemonic=$(echo "$output" | tail -n 1)

    # Parse and update the JSON string
    updated_json=$(echo "$ALLORA_OFFCHAIN_NODE_CONFIG_JSON" | jq --arg name "$NAME" --arg mnemonic "$mnemonic" '
        .wallet.addressKeyName = $name |
        .wallet.addressRestoreMnemonic = $mnemonic
    ')

    stringified_json=$(echo "$updated_json" | jq -c .)

    echo "ALLORA_OFFCHAIN_NODE_CONFIG_JSON='$stringified_json'" > /data/env_file
    echo "ALLORA_OFFCHAIN_ACCOUNT_ADDRESS=$address" >> /data/env_file
    echo "NAME=$NAME" >> /data/env_file

    echo "Updated ALLORA_OFFCHAIN_NODE_CONFIG_JSON saved to /data/env_file"
fi

if grep -q "ENV_LOADED=false" /data/env_file; then
    sed -i 's/ENV_LOADED=false/ENV_LOADED=true/' /data/env_file
else
    echo "ENV_LOADED=true" >> /data/env_file
fi
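The `address`/`mnemonic` extraction above depends on the textual layout of `allorad keys add` output. A self-contained sketch of the same grep/sed/tail parsing against a mocked output (the mock's layout is an assumption; the real CLI output may differ between versions):

```shell
# Mocked `allorad keys add` output: key metadata followed by the mnemonic on the last line.
output='- name: test
  type: local
  address: allo1exampleaddress0000000000000000000000
  pubkey: "..."

word1 word2 word3 word4'

# Same parsing as the init script: strip everything up to "address: ",
# and take the final line as the mnemonic.
address=$(echo "$output" | grep 'address:' | sed 's/.*address: //')
mnemonic=$(echo "$output" | tail -n 1)
echo "address=$address"
echo "mnemonic=$mnemonic"
```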