39 Commits

SHA1 Message Date
e704020b00 rename 2024-09-15 02:46:46 +03:00
2208920da9 remove balance check 2024-09-12 21:29:50 +03:00
523f4267e8 remove success send check 2024-09-12 17:53:00 +03:00
de5c8415d7 add wait and retries 2024-09-12 05:15:53 +03:00
c749135c63 change playbook 2024-09-12 02:42:50 +03:00
1ee64fd824 add error catch 2024-09-12 00:34:56 +03:00
864b96ea8e update playbook 2024-09-12 00:07:45 +03:00
3c30999c5d disable proxy 2024-09-11 22:31:51 +03:00
8507b6cb06 remove RPC check 2024-09-11 16:19:18 +03:00
69b80d081e change timeout 2024-09-10 03:25:50 +03:00
15163ca3ee change time 2024-09-10 00:43:10 +03:00
a02cd6f001 small fixes 2024-09-09 03:32:45 +03:00
7fdc0e7153 disable unused 2024-09-09 03:20:43 +03:00
1e70fb560b fix typos 2024-09-09 02:51:29 +03:00
e37d7e0dbc fix typo 2024-09-08 23:28:12 +03:00
0dc71205dc add docker speedup 2024-09-08 23:06:58 +03:00
3e35a26af4 add print 2024-09-08 23:02:48 +03:00
1178063983 add topic 2024-09-08 20:50:13 +03:00
265ecaf19a playbook change 2024-09-08 19:35:35 +03:00
0c4e9184f1 fix lint errors 2024-09-08 17:02:34 +03:00
6a8906ee94 add playbook 2024-09-08 16:42:34 +03:00
2333cb12af Update config.json 2024-09-07 20:29:36 +03:00
cdc179b1f4 remove Max Retry Reached 2024-09-07 20:27:29 +03:00
4a615806ca add logs_parser 2024-09-07 16:24:49 +03:00
9ba8bba8f5 new updater 2024-09-06 15:52:51 +03:00
20ae1db9fd update env/cfg 2024-09-06 15:23:36 +03:00
ef4b04b30b Merge pull request #17 from dhairya1008/fix-FromAsCasing-warning
Dockerfile: fix FromAsCasing warning
2024-09-05 18:37:42 +02:00
79d23dd10a Merge branch 'main' into fix-FromAsCasing-warning 2024-09-05 18:28:10 +02:00
820bfcc479 Merge pull request #21 from Radovenchyk/patch-1
Fix typo update README
2024-09-05 18:21:17 +02:00
6af36f417b Merge pull request #22 from Olexandr88/patch-1
Update README
2024-09-05 18:20:38 +02:00
e97d8e0162 Merge pull request #19 from allora-network/kush-alloralabs-patch-1
Update README.md
2024-09-05 18:17:05 +02:00
5807ebbeff Update region in Readme 2024-09-05 18:14:10 +02:00
1f2e49f948 Merge pull request #23 from allora-network/clement/fix-healthcheck
Fix healthcheck missing curl dependency
2024-09-05 09:43:45 -04:00
3b3644b470 Fix healthcheck 2024-09-05 15:41:27 +02:00
233393fc63 Update README.md 2024-09-05 15:41:10 +02:00
f5f3ea8364 Update README 2024-09-05 13:32:20 +03:00
a2af3605c3 Fix typo update README 2024-09-05 13:10:48 +03:00
5cbbeeedc2 Update README.md 2024-09-04 16:33:40 -04:00
4d95ca5fbd Dockerfile: fix FromAsCasing warning
Some recent versions of Docker require the FromAsCasing keyword to be in
uppercase

Signed-off-by: dhairya1899 <17itdhairya.parmar@gmail.com>
2024-08-15 01:18:24 +05:30
11 changed files with 603 additions and 26 deletions

.env (new file, +7)

@@ -0,0 +1,7 @@
+TOKEN=###TOKEN###
+TRAINING_DAYS=###TRAINING_DAYS###
+TIMEFRAME=###TIMEFRAME###
+MODEL=###MODEL###
+REGION=EU
+DATA_PROVIDER=###DATA_PROVIDER###
+CG_API_KEY=###CG_API_KEY###

(deleted file, -7)

@@ -1,7 +0,0 @@
-TOKEN=
-TRAINING_DAYS=
-TIMEFRAME=
-MODEL=
-REGION=
-DATA_PROVIDER=
-CG_API_KEY=

.gitignore (vendored, -2)

@@ -8,12 +8,10 @@ logs/*
 inference-data
 worker-data
-config.json
 /data
 **/*.venv*
 **/.cache
-**/.env
 **/env_file
 **/.gitkeep*
 **/*.csv

Dockerfile

@@ -1,4 +1,7 @@
-FROM python:3.11-slim as project_env
+FROM python:3.11-slim AS project_env
+
+# Install curl
+RUN apt-get update && apt-get install -y curl
 
 # Set the working directory in the container
 WORKDIR /app

README.md

@@ -1,6 +1,6 @@
 # Basic Price Prediction Node
 
-This repository provides an example Allora network worker node, designed to offer price predictions. The primary objective is to demonstrate the use of a basic inference model running within a dedicated container, showcasing its integration with the Allora network infrastructure to contribute valuable inferences.
+This repository provides an example [Allora network](https://docs.allora.network/) worker node, designed to offer price predictions. The primary objective is to demonstrate the use of a basic inference model running within a dedicated container, showcasing its integration with the Allora network infrastructure to contribute valuable inferences.
 
 ## Components
@@ -26,25 +26,25 @@ A complete working example is provided in the `docker-compose.yml` file.
 Here are the currently accepted configurations:
 - TOKEN
-Must be one in ['ETH','SOL','BTC','BNB','ARB'].
+Must be one in ('ETH','SOL','BTC','BNB','ARB').
 Note: if you are using `Binance` as the data provider, any token could be used.
 If you are using Coingecko, you should add its `coin_id` in the [token_map here](https://github.com/allora-network/basic-coin-prediction-node/blob/main/updater.py#L107). Find [more info here](https://docs.coingecko.com/reference/simple-price) and the [list here](https://docs.google.com/spreadsheets/d/1wTTuxXt8n9q7C4NDXqQpI3wpKu1_5bGVmP9Xz0XGSyU/edit?usp=sharing).
 - TRAINING_DAYS
 Must be an `int` >= 1.
 Represents how many days of historical data to use.
 - TIMEFRAME
-This should be in this form: `10m`, `1h`, `1d`, etc.
+This should be in this form: `10min`, `1h`, `1d`, `1m`, etc.
 Note: For Coingecko, Data granularity (candle's body) is automatic - [see here](https://docs.coingecko.com/reference/coins-id-ohlc). To avoid downsampling, it is recommanded to use with Coingecko:
 - TIMEFRAME >= 30m if TRAINING_DAYS <= 2
 - TIMEFRAME >= 4h if TRAINING_DAYS <= 30
 - TIMEFRAME >= 4d if TRAINING_DAYS >= 31
 - MODEL
-Must be one in ['LinearRegression','SVR','KernelRidge','BayesianRidge'].
+Must be one in ('LinearRegression','SVR','KernelRidge','BayesianRidge').
 You can easily add support for any other models by [adding it here](https://github.com/allora-network/basic-coin-prediction-node/blob/main/model.py#L133).
 - REGION
-Must be `EU` or `US` - it is used for the Binance API.
+Used for the Binance API. This should be in this form: `US`, `EU`, etc.
 - DATA_PROVIDER
-Must be `Binance` or `Coingecko`. Feel free to add support for other data providers to personalize your model!
+Must be `binance` or `coingecko`. Feel free to add support for other data providers to personalize your model!
 - CG_API_KEY
 This is your `Coingecko` API key, if you've set `DATA_PROVIDER=coingecko`.

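As a worked illustration of the options the README diff above accepts, a filled-in `.env` might look like the following (the specific values are illustrative choices, not taken from the source):

```
TOKEN=ETH
TRAINING_DAYS=30
TIMEFRAME=4h
MODEL=LinearRegression
REGION=EU
DATA_PROVIDER=binance
CG_API_KEY=
```

With `DATA_PROVIDER=binance`, `CG_API_KEY` can stay empty; it is only needed when `DATA_PROVIDER=coingecko`.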
config.json

@@ -1,23 +1,23 @@
 {
     "wallet": {
-        "addressKeyName": "test",
-        "addressRestoreMnemonic": "",
+        "addressKeyName": "###WALLET###",
+        "addressRestoreMnemonic": "###MNEMONIC###",
         "alloraHomeDir": "",
         "gas": "auto",
         "gasAdjustment": 1.5,
-        "nodeRpc": "https://allora-rpc.testnet.allora.network",
-        "maxRetries": 1,
-        "delay": 1,
+        "nodeRpc": "###RPC_URL###",
+        "maxRetries": 10,
+        "delay": 20,
         "submitTx": true
     },
     "worker": [
         {
-            "topicId": 1,
+            "topicId": ###TOPIC###,
             "inferenceEntrypointName": "api-worker-reputer",
             "loopSeconds": 5,
             "parameters": {
                 "InferenceEndpoint": "http://inference:8000/inference/{Token}",
-                "Token": "ETH"
+                "Token": "###TOKEN###"
             }
         }
     ]

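One detail worth noting in the templated config above: with `###TOPIC###` unquoted, `config.json` is not valid JSON until the placeholders are substituted, so nothing should try to parse it before `update.sh` has run. A quick sketch (the template string here is a trimmed, hypothetical fragment, not the full file):

```python
import json

# Trimmed, hypothetical fragment of the templated config.json.
template = '{"worker": [{"topicId": ###TOPIC###}]}'

def is_valid_json(text: str) -> bool:
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

# The raw template fails to parse; after the sed-style substitution it parses.
rendered = template.replace("###TOPIC###", "1")
print(is_valid_json(template), is_valid_json(rendered))  # False True
```

This is also why `update.sh` substitutes into both `.env` and `config.json` before the containers start.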
docker-compose.yml

@@ -1,6 +1,6 @@
 services:
   inference:
-    container_name: inference-basic-eth-pred
+    container_name: inference
     env_file:
       - .env
     build: .
@@ -8,7 +8,7 @@ services:
     ports:
       - "8000:8000"
     healthcheck:
-      test: ["CMD", "curl", "-f", "http://localhost:8000/inference/${TOKEN}"]
+      test: ["CMD", "curl", "-f", "http://inference:8000/inference/${TOKEN}"]
       interval: 10s
       timeout: 5s
       retries: 12
@@ -16,7 +16,7 @@ services:
       - ./inference-data:/app/data
 
   updater:
-    container_name: updater-basic-eth-pred
+    container_name: updater
     build: .
     environment:
       - INFERENCE_API_ADDRESS=http://inference:8000

logs_parser.py (new file, +79)

@@ -0,0 +1,79 @@
import subprocess
import json
import sys
import time


def is_json(myjson):
    try:
        json.loads(myjson)
    except ValueError:
        return False
    return True


def parse_logs(timeout):
    start_time = time.time()
    while True:
        unsuccessful_attempts = 0
        current_retry = 0
        max_retry = 0
        print("Requesting Docker logs...", flush=True)
        process = subprocess.Popen(
            ["docker", "logs", "worker"],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True
        )
        try:
            stdout, stderr = process.communicate()
            if stderr:
                print(f"Error: {stderr.strip()}", flush=True)
            for line in stdout.splitlines():
                #print(f"Line: {line}", flush=True)
                if is_json(line):
                    data = json.loads(line)
                    if data.get("level") == "info" or data.get("level") == "error":
                        print(f"{data['message']}", flush=True)
                    if data.get("msg") == "Send Worker Data to chain" and data.get("message") == "Success":
                        print(f"Success: {data}", flush=True)
                        return True, f"Success after {unsuccessful_attempts} unsuccessful attempts, with current retry {current_retry} out of {max_retry}"
                    elif data.get("msg") == "Send Worker Data to chain" and "Failed, retrying..." in data.get("message", ""):
                        unsuccessful_attempts += 1
                        retry_info = data["message"].split("Retry ")[1].strip("()")
                        current_retry, max_retry = map(int, retry_info.split("/"))
                        if current_retry == max_retry:
                            print(f"Max Retry Reached: {data}", flush=True)
                            return False, "Max Retry Reached"
                    elif data.get("message") == "Error getting latest open worker nonce on topic":
                        print(f"Error: {data}", flush=True)
                        return False, "Error getting latest open worker nonce on topic"
        except Exception as e:
            print(f"Exception occurred: {e}", flush=True)
        print("Sleeping before next log request...", flush=True)
        time.sleep(30)
        if time.time() - start_time > timeout * 60:
            print(f"Timeout reached: {timeout} minutes elapsed without success.", flush=True)
            return False, f"Timeout reached: {timeout} minutes elapsed without success."


if __name__ == "__main__":
    print("Parsing logs...")
    if len(sys.argv) > 1:
        timeout = int(sys.argv[1])
    else:
        timeout = 30
    result = parse_logs(timeout)
    print(result[1])
    if not result[0]:
        print("Exiting 1...")
        sys.exit(1)
    else:
        print("Exiting 0...")
        sys.exit(0)
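The retry bookkeeping in `parse_logs` hinges on one string manipulation: pulling the `current/max` pair out of a `Retry (N/M)` suffix. A standalone sketch of just that step, using a made-up log line shaped like the ones the parser matches:

```python
import json

# Made-up worker log line in the shape logs_parser.py looks for.
sample = '{"msg": "Send Worker Data to chain", "message": "Failed, retrying... (Retry 3/10)", "level": "error"}'

data = json.loads(sample)
# Same extraction as in parse_logs: take the text after "Retry ",
# strip the surrounding parentheses, and split on "/".
retry_info = data["message"].split("Retry ")[1].strip("()")
current_retry, max_retry = map(int, retry_info.split("/"))
print(current_retry, max_retry)  # 3 10
```

When `current_retry == max_retry`, the parser gives up and exits non-zero, which is what the (now commented-out) playbook task keyed on.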

model.py

@@ -114,11 +114,18 @@ def train_model(timeframe):
     price_data = pd.read_csv(training_price_data_path)
     df = load_frame(price_data, timeframe)
 
+    if df.empty:
+        raise ValueError("No data available after loading and formatting. Check the data source or timeframe.")
+
     print(df.tail())
 
     y_train = df['close'].shift(-1).dropna().values
     X_train = df[:-1]
 
+    if X_train.empty or len(y_train) == 0:
+        raise ValueError("Training data is empty. Ensure there is enough data for training.")
+
     print(f"Training data shape: {X_train.shape}, {y_train.shape}")
 
     # Define the model

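The empty-data guards added above protect the target construction in `train_model`, where each row's label is the next period's close (`shift(-1)` followed by `dropna`). The same pairing, sketched with plain lists and invented prices rather than the real DataFrame:

```python
# Invented closing prices, oldest first.
closes = [100.0, 101.5, 99.8, 102.2]

# Equivalent pairing to df[:-1] with df['close'].shift(-1).dropna():
# each feature row is labeled with the following period's close.
X_train = closes[:-1]
y_train = closes[1:]

assert len(X_train) == len(y_train)
print(list(zip(X_train, y_train)))  # [(100.0, 101.5), (101.5, 99.8), (99.8, 102.2)]
```

If `closes` were empty or had a single element, both lists would come out empty, which is exactly the condition the new `ValueError` checks catch early.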
playbook.yml (new file, +470)

@@ -0,0 +1,470 @@
- name: Allora deployment playbook
  hosts: all
  become: true

  vars:
    ansible_python_interpreter: /usr/bin/python3.11
    ipfs_url: https://bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm.ipfs.w3s.link/manifest.json

  tasks:
    - name: Append command to .bash_history
      ansible.builtin.blockinfile:
        path: "~/.bash_history"
        create: true
        block: |
          #1724983098
          cd basic-coin-prediction-node/ ; docker compose logs -f
          #1724983099
          docker logs worker -f
          cd basic-coin-prediction-node/ ; docker compose up
        marker: ""
        mode: '0644'

    - name: Set locale to C.UTF-8
      ansible.builtin.command:
        cmd: localectl set-locale LANG=C.UTF-8
      changed_when: false

    - name: Create APT configuration file to assume yes
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/90forceyes
        content: |
          APT::Get::Assume-Yes "true";
        mode: '0644'

    - name: Update /etc/bash.bashrc
      ansible.builtin.blockinfile:
        path: /etc/bash.bashrc
        block: |
          export HISTTIMEFORMAT='%F, %T '
          export HISTSIZE=10000
          export HISTFILESIZE=10000
          shopt -s histappend
          export PROMPT_COMMAND='history -a'
          export HISTCONTROL=ignoredups
          export LANG=C.UTF-8
          export LC_ALL=C.UTF-8
          alias ls='ls --color=auto'
          shopt -s cmdhist

    - name: Ensure ~/.inputrc exists
      ansible.builtin.file:
        path: /root/.inputrc
        state: touch
        mode: '0644'

    - name: Update ~/.inputrc
      ansible.builtin.blockinfile:
        path: ~/.inputrc
        block: |
          "\e[A": history-search-backward
          "\e[B": history-search-forward

    - name: Ensure ~/.nanorc exists
      ansible.builtin.file:
        path: /root/.nanorc
        state: touch
        mode: '0644'

    - name: Update ~/.nanorc
      ansible.builtin.blockinfile:
        path: ~/.nanorc
        block: |
          set nohelp
          set tabsize 4
          set tabstospaces
          set autoindent
          set positionlog
          set backup
          set backupdir /tmp/
          set locking
          include /usr/share/nano/*.nanorc

    - name: Set hostname
      ansible.builtin.shell: |
        hostnamectl set-hostname {{ serverid }}
        echo "127.0.1.1 {{ serverid }}" >> /etc/hosts
      changed_when: false

    - name: Update apt cache
      ansible.builtin.apt:
        update_cache: true
      register: apt_update_result
      retries: 5
      delay: 50
      until: apt_update_result is succeeded

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist
        force_apt_get: true
        autoremove: true
      register: apt_upgrade_result
      retries: 5
      delay: 50
      until: apt_upgrade_result is succeeded

    # - name: Install packages
    #   ansible.builtin.apt:
    #     name:
    #       - ca-certificates
    #       - zlib1g-dev
    #       - libncurses5-dev
    #       - libgdbm-dev
    #       - libnss3-dev
    #       - curl
    #       - jq
    #       - git
    #       - zip
    #       - wget
    #       - make
    #       - python3
    #       - python3-pip
    #       - iftop
    #     state: present
    #     update_cache: true
    #   async: "{{ 60 * 20 }}"
    #   poll: 30

    # - name: Check no-proxy ipfs access
    #   ansible.builtin.shell: |
    #     curl -s -w "%{http_code}" -o response.json {{ ipfs_url }}
    #   register: noproxy_check
    #   changed_when: false
    #   failed_when: noproxy_check.stdout != "200"
    #
    # - name: Check proxy ipfs access
    #   ansible.builtin.shell: |
    #     curl -s -w "%{http_code}" -o response.json -x {{ proxy }} {{ ipfs_url }}
    #   register: proxy_check
    #   changed_when: false
    #   failed_when: proxy_check.stdout != "200"

    # - name: Install Docker
    #   ansible.builtin.shell: curl -fsSL https://get.docker.com | bash
    #   changed_when: false
    #   async: "{{ 60 * 5 }}"
    #   poll: 30

    # - name: Update Docker daemon journald logging
    #   ansible.builtin.copy:
    #     dest: /etc/docker/daemon.json
    #     content: |
    #       {
    #         "log-driver": "journald"
    #       }
    #     mode: '0644'
    #
    # - name: Restart Docker
    #   ansible.builtin.service:
    #     name: docker
    #     state: restarted
    #
    # - name: Update journald log SystemMaxUse=2G configuration
    #   ansible.builtin.lineinfile:
    #     path: /etc/systemd/journald.conf
    #     line: 'SystemMaxUse=2G'
    #     insertafter: EOF
    #     create: true
    #     mode: '0644'
    #
    # - name: Restart journald
    #   ansible.builtin.service:
    #     name: systemd-journald
    #     state: restarted
    - name: Docker login
      ansible.builtin.shell: docker login -u "{{ docker_username }}" -p "{{ docker_password }}"
      register: docker_login_result
      changed_when: false
      failed_when: "'Login Succeeded' not in docker_login_result.stdout"

    - name: Clone repository
      ansible.builtin.git:
        repo: https://gitea.vvzvlad.xyz/vvzvlad/allora
        dest: "{{ ansible_env.HOME }}/basic-coin-prediction-node"
        version: "{{ git_version }}"
        force: true
      async: "{{ 60 * 15 }}"
      poll: 30

    - name: Update environment variables
      ansible.builtin.shell: |
        ./update.sh WALLET "{{ wallet }}"
        ./update.sh MNEMONIC "{{ mnemonic }}"
        ./update.sh RPC_URL "{{ rpc_url }}"
        ./update.sh TOKEN "{{ token }}"
        ./update.sh TOPIC "{{ topic }}"
        ./update.sh TRAINING_DAYS "{{ training_days }}"
        ./update.sh TIMEFRAME "{{ timeframe }}"
        ./update.sh MODEL "{{ model }}"
        ./update.sh DATA_PROVIDER "{{ data_provider }}"
        ./update.sh CG_API_KEY "{{ cg_api_key }}"
      args:
        chdir: "{{ ansible_env.HOME }}/basic-coin-prediction-node"
      changed_when: false

    - name: Init config
      ansible.builtin.shell: ./init.config ; true
      args:
        chdir: "{{ ansible_env.HOME }}/basic-coin-prediction-node"
      changed_when: false

    - name: Build docker compose
      ansible.builtin.command: docker compose build -q
      args:
        chdir: "{{ ansible_env.HOME }}/basic-coin-prediction-node"
      environment:
        COMPOSE_INTERACTIVE_NO_CLI: 'true'
      changed_when: false
      async: "{{ 60 * 45 }}"
      poll: "{{ 60 * 5 }}"

    # - name: Docker pre-up
    #   ansible.builtin.command: docker compose up -d
    #   args:
    #     chdir: "{{ ansible_env.HOME }}/basic-coin-prediction-node"
    #   environment:
    #     COMPOSE_INTERACTIVE_NO_CLI: 'true'
    #   changed_when: false
    #   async: "{{ 60 * 80 }}"
    #   poll: "{{ 60 * 5 }}"

    # - name: Check Docker container status
    #   ansible.builtin.shell: >
    #     if [ $(docker ps -q | wc -l) -eq $(docker ps -a -q | wc -l) ]; then
    #       echo "all_running";
    #     else
    #       echo "not_all_running";
    #     fi
    #   register: container_status
    #   retries: 10
    #   delay: 30
    #   until: container_status.stdout.find("all_running") != -1
    #
    # - name: Docker stop (pre-up)
    #   ansible.builtin.command: docker compose stop
    #   args:
    #     chdir: "{{ ansible_env.HOME }}/basic-coin-prediction-node"
    #   environment:
    #     COMPOSE_INTERACTIVE_NO_CLI: 'true'
    #   changed_when: false
    #
    # - name: Check external IP before
    #   ansible.builtin.command: curl https://ifconfig.me
    #   register: ip_before
    #   changed_when: false
    #
    # - name: Validate IP address
    #   ansible.builtin.assert:
    #     that:
    #       - ip_before.stdout | ansible.utils.ipaddr
    #     fail_msg: "The returned value is not a valid IP address."
    #     success_msg: "The returned value is a valid IP address."

    # - name: Download tun2socks
    #   ansible.builtin.get_url:
    #     url: https://github.com/xjasonlyu/tun2socks/releases/download/v2.5.2/tun2socks-linux-amd64.zip
    #     dest: /tmp/tun2socks-linux-amd64.zip
    #     mode: '0644'
    #   async: "{{ 60 * 5 }}"
    #   poll: 30
    #
    # - name: Unzip tun2socks
    #   ansible.builtin.unarchive:
    #     src: /tmp/tun2socks-linux-amd64.zip
    #     dest: /usr/local/sbin/
    #     remote_src: true
    #     mode: '0755'
    #
    # - name: Create proxy file
    #   ansible.builtin.copy:
    #     content: "{{ proxy }}"
    #     dest: /root/proxy
    #     mode: '0644'
    #
    # - name: Create tun2socks systemd service
    #   ansible.builtin.copy:
    #     dest: /etc/systemd/system/tun2socks.service
    #     content: |
    #       [Unit]
    #       Description=Tun2Socks gateway
    #       After=network.target
    #       Wants=network.target
    #
    #       [Service]
    #       User=root
    #       Type=simple
    #       RemainAfterExit=true
    #       ExecStartPre=/bin/sh -c 'ip route add $(cat /root/proxy | grep -oP "(?<=@)[0-9.]+(?=:)" )/32 via $(ip route | grep -oP "(?<=default via )[0-9.]+")'
    #       ExecStart=/bin/sh -c '/usr/local/sbin/tun2socks-linux-amd64 --device tun0 --proxy $(cat /root/proxy)'
    #       ExecStopPost=/bin/sh -c 'ip route del $(cat /root/proxy | grep -oP "(?<=@)[0-9.]+(?=:)" )/32 via $(ip route | grep -oP "(?<=default via )[0-9.]+")'
    #       Restart=always
    #
    #       [Install]
    #       WantedBy=multi-user.target
    #     mode: '0644'
    #
    # - name: Create network configuration for tun0
    #   ansible.builtin.copy:
    #     dest: /etc/systemd/network/10-proxy.network
    #     content: |
    #       [Match]
    #       Name=tun0
    #
    #       [Network]
    #       Address=10.20.30.1/24
    #
    #       [Route]
    #       Gateway=0.0.0.0
    #     mode: '0644'
    #
    # - name: Enable and start tun2socks service
    #   ansible.builtin.systemd:
    #     name: tun2socks
    #     enabled: true
    #     state: started
    #
    # - name: Reload network configuration
    #   ansible.builtin.command: networkctl reload
    #   changed_when: false
    #
    # - name: Restart tun2socks service
    #   ansible.builtin.systemd:
    #     name: tun2socks
    #     state: restarted
    - name: Check RPC availability
      ansible.builtin.uri:
        url: "{{ rpc_url }}/health?"
        method: GET
        return_content: true
        timeout: 30
      register: rpc_url_response
      retries: 3
      delay: 120
      failed_when:
        - rpc_url_response.status != 200
        - rpc_url_response.json is not none and rpc_url_response.json is not defined

    - name: Check Binance URL availability
      ansible.builtin.uri:
        url: "https://api.binance.com/api/v3/klines?symbol=BTCUSDT&interval=1M&limit=1"
        method: GET
        return_content: true
      register: binance_url_response
      retries: 3
      delay: 60
      failed_when:
        - binance_url_response.status != 200
        - binance_url_response.json is not none and binance_url_response.json is not defined

    # - name: Get balance for the wallet
    #   retries: 3
    #   delay: 30
    #   ansible.builtin.shell: |
    #     response=$(curl --silent --location --request GET "https://allora-api.testnet.allora.network/cosmos/bank/v1beta1/balances/{{ wallet }}") && \
    #     echo "$response" && \
    #     uallo_balance=$(echo "$response" | jq -r '.balances[] | select(.denom == "uallo") | .amount // 0') && \
    #     echo "uallo_balance: $uallo_balance" && \
    #     if [ "$uallo_balance" -gt 100000 ]; then
    #       echo "Balance {{ wallet }} > 100000"
    #     else
    #       echo "Balance {{ wallet }} < 100000"
    #       exit 1
    #     fi
    #   register: wallet_balance_check
    #   failed_when: wallet_balance_check.rc != 0

    # - name: Check external IP after
    #   ansible.builtin.command: curl https://ifconfig.me
    #   register: ip_after
    #   changed_when: false
    #
    # - name: Validate IP address
    #   ansible.builtin.assert:
    #     that:
    #       - ip_after.stdout | ansible.utils.ipaddr
    #     fail_msg: "The returned value is not a valid IP address."
    #     success_msg: "The returned value is a valid IP address."
    #
    # - name: Show IPs
    #   ansible.builtin.debug:
    #     msg: "External IP before: {{ ip_before.stdout }}, External IP after: {{ ip_after.stdout }}"
    #
    # - name: Compare external IPs
    #   ansible.builtin.fail:
    #     msg: "External IP before and after should not be the same"
    #   when: ip_before.stdout == ip_after.stdout

    - name: Docker up
      ansible.builtin.command: docker compose up -d
      args:
        chdir: "{{ ansible_env.HOME }}/basic-coin-prediction-node"
      environment:
        COMPOSE_INTERACTIVE_NO_CLI: 'true'
      changed_when: false
      async: "{{ 60 * 80 }}"
      poll: "{{ 60 * 5 }}"

    - name: Check Docker containers status
      ansible.builtin.shell: >
        if [ $(docker ps -q | wc -l) -eq $(docker ps -a -q | wc -l) ]; then
          echo "all_running";
        else
          echo "not_all_running";
        fi
      register: container_status
      retries: 10
      delay: 30
      until: container_status.stdout.find("all_running") != -1

    - name: Check "not have enough balance"
      ansible.builtin.command: docker logs {{ item }} 2>&1
      register: docker_logs_check
      changed_when: false
      failed_when: '"not have enough balance" in docker_logs_check.stdout'
      with_items:
        - worker
        - worker-1
        - worker-2

    - name: Check updater endpoint
      ansible.builtin.shell: |
        response=$(curl --silent --location --request GET http://localhost:8000/update) && \
        if [ "$response" != "0" ]; then
          echo "Updater endpoint check failed: $response != 0"
          exit 1
        fi
      register: updater_shell_response
      retries: 2
      delay: 60
      until: updater_shell_response.rc == 0
      changed_when: false

    - name: Check inference endpoint
      ansible.builtin.shell: |
        response=$(curl --silent --location --request GET http://localhost:8000/inference/{{ token }}) && \
        status=$(curl -o /dev/null -s -w "%{http_code}\\n" http://localhost:8000/inference/{{ token }}) && \
        if [ "$status" -ne 200 ] || ! echo "$response" | grep -qE '^[0-9]+(\.[0-9]+)?$'; then
          echo "Inference endpoint check failed: status $status, response $response"
          exit 1
        fi
      register: inference_shell_response
      retries: 2
      delay: 60
      failed_when: inference_shell_response.rc != 0
      changed_when: false

    # - name: Wait success send
    #   ansible.builtin.shell: |
    #     python3 logs_parser.py 80
    #   args:
    #     chdir: "{{ ansible_env.HOME }}/basic-coin-prediction-node"
    #   register: docker_logs_check
    #   changed_when: false
    #   failed_when: docker_logs_check.rc != 0

    - name: Remove docker login credentials
      ansible.builtin.file:
        path: /root/.docker/config.json
        state: absent

update.sh (new executable file, +20)

@@ -0,0 +1,20 @@
#!/usr/bin/env bash

if [ "$#" -ne 2 ]; then
    echo "Usage: $0 <PARAMETER> <NEW_VALUE>"
    exit 1
fi

PARAMETER=$1
NEW_VALUE=$2

# List of files to update
FILES=(
    "./config.json"
    ".env"
)

for FILE in "${FILES[@]}"; do
    EXPANDED_FILE=$(eval echo "$FILE")
    sed -i "s|###$PARAMETER###|$NEW_VALUE|g" "$EXPANDED_FILE"
done
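To see what the substitution does without touching the real files, the same `sed` expression can be run against a throwaway copy (the path and value here are hypothetical):

```shell
# Create a throwaway template with the same placeholder style.
cat > /tmp/demo.env <<'EOF'
TOKEN=###TOKEN###
TOPIC=###TOPIC###
EOF

# The substitution update.sh performs for: ./update.sh TOKEN ETH
sed -i "s|###TOKEN###|ETH|g" /tmp/demo.env

cat /tmp/demo.env
```

Placeholders that were not named in the call (here `###TOPIC###`) are left untouched, so each `./update.sh` invocation in the playbook fills in exactly one parameter.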