Compare commits

...

83 Commits

Author · SHA1 · Message · Date

vvzvlad · 8fe2af1876 · Update playbook.yml · 2025-05-04 13:07:59 +03:00
vvzvlad · a9ee8e1f80 · add ip update · 2025-04-20 18:52:55 +03:00
vvzvlad · e7f4bb6a35 · Fix status message formatting in check_logs function · 2025-04-12 17:02:08 +03:00
vvzvlad · 7ce0926d91 · Enhance number formatting · 2025-04-12 00:22:25 +03:00
vvzvlad · 3f49993f6b · Update idle detection threshold to 4 hours · 2025-04-12 00:11:13 +03:00
vvzvlad · ed2229b139 · Fix status message · 2025-04-12 00:09:45 +03:00
vvzvlad · 79e7e1a89b · Refactor status message · 2025-04-12 00:09:03 +03:00
vvzvlad · a19b22f2c0 · Log enhancement · 2025-04-12 00:05:23 +03:00
vvzvlad · 25dcd27c69 · Refactor check_log · 2025-04-12 00:02:37 +03:00
vvzvlad · dcd3a62e3d · rename · 2025-04-12 00:02:24 +03:00
vvzvlad · de8757d59f · Implement conditional reboot · 2025-04-11 01:30:02 +03:00
vvzvlad · 4e1804bb06 · Update state file and grist data file paths in checker.py · 2025-04-11 01:20:59 +03:00
e1af79bac9 · enhance health check · 2025-04-11 01:07:54 +03:00
vvzvlad · 20a4e9cfd4 · Update projects/hello-world/container/config.json · 2025-04-02 03:12:31 +03:00
a80854253a · Update state file path · 2025-04-01 16:02:23 +03:00
vvzvlad · 1b7795f038 · Update projects/hello-world/container/config.json · 2025-03-31 01:13:27 +03:00
vvzvlad · 6495b95c8d · Add state management · 2025-03-31 01:12:21 +03:00
vvzvlad · 63e27f31ac · Remove grpc-balancer.py and update deployment configuration · 2025-01-20 23:55:26 +03:00
    - Deleted grpc-balancer.py as part of the cleanup process, streamlining the application setup.
    - Updated playbook.yml to include new commands for managing the grpcbalancer service and SSL certificate.
    - Modified docker-compose.yaml to remove the mounting of the SSL certificate, reflecting the updated deployment strategy.
    - Adjusted rebuild.sh to change the working directory to ~/node for consistency in project structure.
vvzvlad · 64d57407e0 · Refactor grpc-balancer to use Gunicorn and update playbook.yml for deployment · 2025-01-20 19:32:16 +03:00
    - Replaced the SSL server implementation in grpc-balancer.py with Gunicorn for improved performance and scalability.
    - Updated playbook.yml to use Gunicorn for starting the grpc-balancer service, including SSL certificate configuration.
    - Removed the waitress dependency in favor of Gunicorn, streamlining the application setup.
vvzvlad · e40e14dea5 · Add SSL support to grpc-balancer and update playbook for certificate management · 2025-01-20 19:21:12 +03:00
    - Modified grpc-balancer.py to start an SSL server using certificates.
    - Added Ansible tasks in playbook.yml to create and install SSL certificates.
    - Updated docker-compose.yaml to mount the SSL certificate into the container.
vvzvlad · b988582553 · Refactor grpcbalancer setup in playbook.yml and remove legacy files. Replaced shell commands with Ansible copy module for grpcbalancer service file creation and enabled the service using systemd. Deleted grpc-balancer.py and grpc-balancer.service files as part of the cleanup process, streamlining the deployment configuration. · 2025-01-20 18:57:00 +03:00
vvzvlad · 22ce6c07c3 · Refactor check_logs function in checker.py to improve log output. Moved the subscription ID logging to ensure it is always checked and logged before returning the status, enhancing clarity in the synchronization status reporting. · 2025-01-20 13:47:29 +03:00
vvzvlad · 834ddb4438 · Fix log status message in checker.py to return head_sub_id directly instead of formatted number. This change improves clarity in log output. · 2025-01-20 13:33:54 +03:00
vvzvlad · 7e8587660d · Refactor config.json to enhance snapshot synchronization settings. Moved snapshot_sync parameters to a new location, ensuring clarity and organization. This change maintains the same values for sleep, batch_size, starting_sub_id, and sync_period, improving overall configuration structure. · 2025-01-20 13:24:04 +03:00
vvzvlad · 6b431823f5 · Refactor config.json to enhance snapshot synchronization settings and remove Docker credentials. Updated snapshot_sync parameters: increased sleep time to 3 seconds, adjusted batch_size to 800, changed starting_sub_id to 210000, and extended sync_period to 30 seconds for improved performance. · 2025-01-20 12:59:53 +03:00
vvzvlad · 82e6047e86 · Comment out the apt update and upgrade steps in playbook.yml to prevent unnecessary package updates during execution, streamlining the playbook's operation. · 2025-01-20 10:25:43 +03:00
vvzvlad · 15f277057e · Comment out Docker login and credential removal steps in playbook.yml for security reasons, ensuring sensitive information is not exposed in the playbook. · 2025-01-20 09:43:09 +03:00
vvzvlad · 04d44aeadf · Update playbook.yml to configure Docker daemon for journald logging and add registry mirrors. This change enhances logging capabilities and allows for the use of a custom Docker registry. · 2025-01-20 09:42:41 +03:00
vvzvlad · b006ea31b0 · Refactor playbook.yml to improve contract deployment and execution handling. Added asynchronous execution with polling for contract deployment and call commands, ensuring successful completion checks. Updated shell command execution to use /bin/bash for consistency. · 2025-01-19 19:29:37 +03:00
vvzvlad · 04efc25a48 · Add Docker pull command for hello-world image in playbook.yml · 2025-01-19 18:09:09 +03:00
vvzvlad · cccbc07db1 · Update docker-compose.yaml and config.json for improved performance and configuration adjustments · 2025-01-19 17:51:52 +03:00
    - Bump Docker image version from 1.2.0 to 1.4.0 for enhanced functionality.
    - Modify config.json to change trail_head_blocks from 0 to 3, adjust snapshot_sync sleep from 3 to 1.5 seconds, and increase batch_size from 1800 to 10000, while adding sync_period of 10 seconds for better synchronization efficiency.
vvzvlad · 7d5889553d · Update playbook.yml to reduce RestartSec from 1800 to 600 for improved service responsiveness · 2025-01-19 12:19:03 +03:00
vvzvlad · 34776214d6 · Downgrade Docker image version in docker-compose.yaml from 1.4.0 to 1.2.0 to revert to a previous stable release. · 2025-01-19 12:08:08 +03:00
vvzvlad · 0cc12c5446 · Update Docker image version in docker-compose.yaml from 1.2.0 to 1.4.0 for improved functionality and performance. · 2025-01-19 12:00:20 +03:00
vvzvlad · 57c8b81c13 · Add format_number function to checker.py for improved subscription ID formatting · 2025-01-19 11:56:00 +03:00
    This update introduces a new `format_number` function that formats subscription IDs into a more readable format (e.g., converting 1000 to '1k'). The `check_logs` function has been modified to utilize this new formatting for both head subscription ID and last subscription ID in the status messages, enhancing clarity in log analysis and improving the overall readability of subscription status reporting.
vvzvlad · 382a910856 · Enhance check_logs function in checker.py to capture and log head subscription ID. Added logic to extract "head sub id" from logs and return it in the status message, improving clarity in subscription state reporting. This update complements existing functionality for tracking last subscription ID, ensuring comprehensive log analysis. · 2025-01-19 11:54:47 +03:00
vvzvlad · a0d2e74115 · Update check_logs function in checker.py to modify subscription status reporting. Changed the return message from "Subscription" to "Sync" for improved clarity in log analysis. This adjustment enhances the understanding of the subscription state in the context of log processing. · 2025-01-19 11:51:00 +03:00
vvzvlad · c1f23386b5 · Refactor check_logs function in checker.py to improve logging and subscription status reporting. Renamed parameter from log_handler to logger for clarity. Added logging for cases where no subscription is found, enhancing the visibility of log analysis outcomes. · 2025-01-19 11:42:55 +03:00
vvzvlad · e5a0eef020 · Enhance check_logs function in checker.py to capture last subscription ID from logs. Added logic to identify and return the most recent subscription ID when "Ignored subscription creation" messages are detected. This improves the clarity of log analysis by providing specific subscription status, while maintaining the existing error handling for Docker log retrieval. · 2025-01-19 11:42:14 +03:00
vvzvlad · c95fce1b69 · Add clean_ansi function to checker.py for log processing · 2025-01-19 11:40:43 +03:00
    This update introduces a new `clean_ansi` function to remove ANSI escape sequences from log output, enhancing readability. The `check_logs` function has been modified to utilize this new function, ensuring that the logs retrieved from Docker are plain text, which improves clarity for subsequent log analysis.
vvzvlad · cd119be631 · Update check_logs function in checker.py to suppress color output in docker logs retrieval. This change enhances log readability by ensuring that the output is plain text, improving clarity for subsequent log analysis. · 2025-01-19 11:38:25 +03:00
vvzvlad · a5a27829de · Refactor check_logs function in checker.py to simplify log handling. Removed subscription message detection logic and replaced it with a basic line printing mechanism. The function now returns a default status of "Idle", streamlining the process and reducing complexity in log analysis. · 2025-01-19 11:37:27 +03:00
vvzvlad · 246e073092 · Update subscription pattern in check_logs function of checker.py to improve error message handling. Enhanced regex to capture error types more accurately, ensuring whitespace is stripped from error messages. This change refines log analysis and improves clarity in health check reporting. · 2025-01-19 11:34:57 +03:00
vvzvlad · 43dbcd0a17 · Refactor check_logs function in checker.py to enhance subscription message handling. Updated regex to capture error types alongside subscription IDs, improving log analysis. Added logging for cases with no matches and refined status messages based on error type, enhancing clarity in health check reporting. · 2025-01-19 11:34:16 +03:00
vvzvlad · c062ad8e66 · Enhance package management in checker.py by implementing version control for required packages. Added warnings filter to suppress warning messages. The script now checks for specific package versions and installs them accordingly, improving dependency management and ensuring compatibility. This change enhances the robustness of the environment setup for the checker process. · 2025-01-19 11:32:51 +03:00
vvzvlad · da56ef57f0 · Comment out random sleep functionality in checker.py to simplify execution flow. This change removes unnecessary delays, enhancing the responsiveness of the checker process while maintaining existing logging and error handling mechanisms. · 2025-01-19 11:31:51 +03:00
vvzvlad · b89e6c20ce · Refactor self_update function in checker.py to accept a logger parameter for improved logging. This change enhances the update process by providing consistent logging messages for update checks, successes, and errors, thereby improving the clarity and maintainability of the script's update functionality. · 2025-01-19 11:29:48 +03:00
vvzvlad · b93e5f890a · Refactor check_logs function in checker.py to improve subscription message detection. Changed from single match to iterating over all matches, allowing retrieval of the last subscription ID found in logs. Updated logging to reflect the last subscription message or indicate absence, enhancing clarity in log analysis. · 2025-01-19 11:11:38 +03:00
vvzvlad · 625a8f71ae · Update subscription pattern in check_logs function of checker.py to include a wildcard for error messages. This change enhances the regex matching for subscription completion, improving the accuracy of log analysis in health checks. · 2025-01-19 09:54:48 +03:00
vvzvlad · 8acb4768e3 · Reduce random sleep duration in checker.py from 10 minutes to 1 minute to improve responsiveness of the checker process. This change enhances the overall efficiency of the health check execution. · 2025-01-19 09:48:54 +03:00
vvzvlad · b9d966ff90 · Update check_logs function in checker.py to retrieve logs from the 'infernet-node' container instead of using 'docker compose'. This change improves log retrieval efficiency by reducing the time window from 2 hours to 10 minutes, while also updating error handling to reflect the new command. Enhances clarity in log management for health checks. · 2025-01-19 09:45:18 +03:00
vvzvlad · 1e38b5aca6 · Refactor check_logs function in checker.py to focus on subscription completion detection. Removed dynamic error and proof speed logging, simplifying the health check process. Now returns a status based on subscription completion, enhancing clarity and maintainability of the health check logic. · 2025-01-19 09:44:17 +03:00
vvzvlad · 3abc4e0a81 · Update playbook.yml to add journalctl command for node-checker service and comment out grpcbalancer service installation steps. This change enhances logging capabilities while simplifying the playbook by removing unnecessary service setup commands. · 2025-01-19 09:26:27 +03:00
vvzvlad · 49c13d9e42 · Enhance health check in checker.py by adding a print statement for the fixed message "test ok". This improves visibility of the health status during execution while maintaining the existing error handling structure. · 2025-01-18 17:57:29 +03:00
vvzvlad · 4e50ee554b · Comment out dynamic log data retrieval in checker.py, maintaining a fixed health status callback message "test ok". This change simplifies the health check process while preserving error handling functionality. · 2025-01-18 15:03:02 +03:00
vvzvlad · f5706ac887 · Update health status callback in checker.py to return a fixed message "test ok" instead of dynamic log data · 2025-01-18 14:52:59 +03:00
vvzvlad · da787935df · Add node-checker service · 2025-01-18 13:59:41 +03:00
vvzvlad · 77a5f1a071 · Refactor upload_stats_to_grist function in grpc-balancer.py to consolidate server statistics into a single dictionary, improving clarity and maintainability. Update error logging to include exception details. · 2025-01-17 07:20:54 +03:00
vvzvlad · 8425114abf · Refactor update_contracts.sh to remove unnecessary directory change and streamline contract address handling · 2025-01-16 09:16:28 +03:00
vvzvlad · 92f6cdcd40 · Update playbook.yml to remove unnecessary '--no-dependencies' argument from grpcbalancer installation command · 2025-01-16 09:05:42 +03:00
vvzvlad · f2cce2f592 · Refactor grpcbalancer installation in playbook.yml to use shell commands for copying and setting permissions; update service file path in grpc-balancer.service; remove deprecated grpcbalancer.py file. · 2025-01-16 08:51:07 +03:00
vvzvlad · 5d8a2cfdd6 · Refactor update.sh to simplify usage by removing wallet address, private key, and RPC URL parameters; streamline script for improved clarity and maintainability. · 2025-01-16 08:32:17 +03:00
vvzvlad · 4c0268823b · Add execute permission to update.sh in playbook.yml · 2025-01-16 08:26:06 +03:00
vvzvlad · bbd1de0020 · Fix variable name in update script call in playbook.yml · 2025-01-16 07:56:19 +03:00
vvzvlad · 3a2847295a · Refactor playbook to use 'node' directory instead of 'ritual'; add grpcbalancer installation steps · 2025-01-16 07:48:51 +03:00
vvzvlad · 671c7a4507 · update v9 · 2025-01-16 07:31:53 +03:00
vvzvlad · e0913ece08 · edit retries/delay · 2024-09-28 03:34:11 +03:00
vvzvlad · 1a775c6b87 · add bashrc · 2024-09-28 03:30:42 +03:00
vvzvlad · 26323e9c41 · revert to 1.2.0 · 2024-09-26 02:36:31 +03:00
vvzvlad · a79bdc94d0 · bump ver · 2024-09-25 17:52:04 +03:00
vvzvlad · 272ee7522b · add cmd in bash history · 2024-09-24 14:54:38 +03:00
vvzvlad · e85f0b987e · remove unless stopped · 2024-09-23 02:42:05 +03:00
vvzvlad · 7eab098a99 · add logs setting · 2024-09-23 02:39:45 +03:00
vvzvlad · c0502193b2 · remove unused · 2024-09-22 00:50:19 +03:00
vvzvlad · 9be7df6bcf · add sh files · 2024-09-21 23:51:36 +03:00
vvzvlad · 8183934d09 · add commands to bashhistory · 2024-09-21 20:05:18 +03:00
vvzvlad · 1558f60810 · change settings · 2024-09-21 20:05:07 +03:00
vvzvlad · 138fcfc321 · add bash history · 2024-09-21 19:25:29 +03:00
vvzvlad · 153ccfd4db · fix Install Forge and Infernet SDK · 2024-09-18 01:25:24 +03:00
vvzvlad · eaceb3ecfa · many fixes · 2024-09-18 01:18:25 +03:00
vvzvlad · c0fd0330af · fix bug · 2024-09-18 01:12:07 +03:00
vvzvlad · fc49f06d75 · big update · 2024-09-18 01:06:32 +03:00
17702b1396 · add git_version · 2024-09-15 03:00:16 +03:00
9 changed files with 630 additions and 141 deletions

checker.py (new file, 374 lines added)

@@ -0,0 +1,374 @@
# flake8: noqa
# pylint: disable=broad-exception-raised, raise-missing-from, too-many-arguments, redefined-outer-name
# pylance: disable=reportMissingImports, reportMissingModuleSource, reportGeneralTypeIssues
# type: ignore

import warnings
warnings.filterwarnings("ignore", category=Warning)

import re
from datetime import datetime, timedelta, timezone
import subprocess
import os
import time
import random
import sys
import pkg_resources
import requests
import json
from collections import deque

required_packages = {
    'grist-api': 'latest',
    'colorama': 'latest',
    'requests': '2.31.0',
    'urllib3': '2.0.7',
    'charset-normalizer': '3.3.2'
}

installed_packages = {pkg.key: pkg.version for pkg in pkg_resources.working_set}
for package, version in required_packages.items():
    if package not in installed_packages or (version != 'latest' and installed_packages[package] != version):
        if version == 'latest':
            subprocess.check_call([sys.executable, '-m', 'pip', 'install', package, '--break-system-packages'])
        else:
            subprocess.check_call([sys.executable, '-m', 'pip', 'install', f"{package}=={version}", '--break-system-packages'])

from grist_api import GristDocAPI
import colorama
import logging
import socket

def self_update(logger):
    logger.info("Checking for updates..")
    script_path = os.path.abspath(__file__)
    update_url = "https://gitea.vvzvlad.xyz/vvzvlad/ritual/raw/branch/main-22aug/checker.py"
    try:
        response = requests.get(update_url, timeout=10)
        if response.status_code == 200:
            current_content = ""
            with open(script_path, 'r', encoding='utf-8') as f:
                current_content = f.read()
            if current_content != response.text:
                with open(script_path, 'w', encoding='utf-8') as f:
                    f.write(response.text)
                logger.info("Script updated successfully, restarting")
                os.execv(sys.executable, ['python3'] + sys.argv)
            else:
                logger.info("Script is up to date")
        else:
            logger.error(f"Failed to download update, status code: {response.status_code}")
    except Exception as e:
        logger.error(f"Update error: {str(e)}")

class GRIST:
    def __init__(self, server, doc_id, api_key, logger):
        self.server = server
        self.doc_id = doc_id
        self.api_key = api_key
        self.logger = logger
        self.grist = GristDocAPI(doc_id, server=server, api_key=api_key)

    def table_name_convert(self, table_name):
        return table_name.replace(" ", "_")

    def to_timestamp(self, dtime: datetime) -> int:
        if dtime.tzinfo is None:
            dtime = dtime.replace(tzinfo=timezone(timedelta(hours=3)))
        return int(dtime.timestamp())

    def insert_row(self, data, table):
        data = {key.replace(" ", "_"): value for key, value in data.items()}
        row_id = self.grist.add_records(self.table_name_convert(table), [data])
        return row_id

    def update_column(self, row_id, column_name, value, table):
        if isinstance(value, datetime):
            value = self.to_timestamp(value)
        column_name = column_name.replace(" ", "_")
        self.grist.update_records(self.table_name_convert(table), [{ "id": row_id, column_name: value }])

    def delete_row(self, row_id, table):
        self.grist.delete_records(self.table_name_convert(table), [row_id])

    def update(self, row_id, updates, table):
        for column_name, value in updates.items():
            if isinstance(value, datetime):
                updates[column_name] = self.to_timestamp(value)
        updates = {column_name.replace(" ", "_"): value for column_name, value in updates.items()}
        self.grist.update_records(self.table_name_convert(table), [{"id": row_id, **updates}])

    def fetch_table(self, table):
        return self.grist.fetch_table(self.table_name_convert(table))

    def find_record(self, id=None, state=None, name=None, table=None):
        if table is None:
            raise ValueError("Table is not specified")
        table_data = self.grist.fetch_table(self.table_name_convert(table))
        if id is not None:
            record = [row for row in table_data if row.id == id]
            return record
        if state is not None and name is not None:
            record = [row for row in table_data if row.State == state and row.name == name]
            return record
        if state is not None:
            record = [row for row in table_data if row.State == state]
            return record
        if name is not None:
            record = [row for row in table_data if row.Name == name]
            return record

    def find_settings(self, key, table):
        table = self.fetch_table(self.table_name_convert(table))
        for record in table:
            if record.Setting == key:
                if record.Value is None or record.Value == "":
                    raise ValueError(f"Setting {key} blank")
                return record.Value
        raise ValueError(f"Setting {key} not found")

def clean_ansi(text):
    ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
    return ansi_escape.sub('', text)

def format_number(number_str):
    try:
        number = int(number_str)
        if number >= 1000:
            value_in_k = number / 1000.0
            # Format to 3 decimal places if needed, remove trailing zeros and potentially the dot
            formatted_num = f"{value_in_k:.3f}".rstrip('0').rstrip('.')
            return f"{formatted_num}k"
        return str(number)
    except (ValueError, TypeError):
        return "NaN"  # Or some other indicator of invalid input

def check_logs(logger, initial_sync_count, previous_status):
    """
    Checks docker logs for node status (Syncing, OK, Idle) and updates sync count.

    Args:
        logger: The logger instance.
        initial_sync_count: The sync count read from Grist at the start.
        previous_status: The last known status read from Grist ('Sync', 'OK', 'Idle', or others).

    Returns:
        A dictionary containing:
        - status_message: A string describing the current status (e.g., "Sync: 123k (5)").
        - current_status_type: The type of the current status ('Sync', 'OK', 'Idle', 'Error').
        - current_sync_count: The updated sync count.
    """
    current_sync_count = initial_sync_count  # Initialize with the value from Grist
    try:
        logs = subprocess.run(['docker', 'logs', '--since', '10m', 'infernet-node'], capture_output=True, text=True, check=True)
        log_content = clean_ansi(logs.stdout)

        last_checking_info = None
        last_ignored_id = None
        last_head_sub_id = None

        # Regex patterns
        checking_pattern = re.compile(r'Checking subscriptions.*last_sub_id=(\d+).*head_sub_id=(\d+).*num_subs_to_sync=(\d+)')
        ignored_pattern = re.compile(r'Ignored subscription creation.*id=(\d+)')
        head_sub_pattern = re.compile(r'head sub id is:\s*(\d+)')

        # Use deque to efficiently get the last few relevant lines if needed,
        # but processing all lines and keeping the last match is simpler here.
        for line in log_content.splitlines():
            match = checking_pattern.search(line)
            if match:
                last_checking_info = {
                    "last_sub_id": match.group(1),
                    "head_sub_id": match.group(2),
                    "num_subs_to_sync": int(match.group(3))
                }
                continue  # Prioritize checking_info

            match = ignored_pattern.search(line)
            if match:
                last_ignored_id = match.group(1)
                continue

            match = head_sub_pattern.search(line)
            if match:
                last_head_sub_id = match.group(1)
                # No continue here, allows checking_info from same timeframe to override

        current_status_type = "Idle"
        status_message = "Idle"

        if last_checking_info:
            formatted_id = format_number(last_checking_info["last_sub_id"])
            if last_checking_info["num_subs_to_sync"] > 0:
                current_status_type = "Sync"
                status_message = f"Sync: {formatted_id}"  # Use current_sync_count
                logger.info(f"Node is syncing. Last sub ID: {last_checking_info['last_sub_id']}, Num subs to sync: {last_checking_info['num_subs_to_sync']}")
            else:
                current_status_type = "OK"
                # Increment count only on transition from Sync to OK
                if previous_status == "Sync":
                    current_sync_count += 1  # Increment local count
                    logger.info(f"Sync completed. Sync count incremented to {current_sync_count}.")
                status_message = f"OK: {formatted_id}"  # Use current_sync_count
                logger.info(f"Node is OK. Last sub ID: {last_checking_info['last_sub_id']}")
        elif last_ignored_id:
            # Fallback to "Ignored" logs if "Checking" is missing
            formatted_id = format_number(last_ignored_id)
            current_status_type = "Sync"  # Assume sync if we only see ignored creations recently
            status_message = f"Sync: {formatted_id}"  # Use current_sync_count
            logger.info(f"Node possibly syncing (based on ignored logs). Last ignored ID: {last_ignored_id}")
        elif last_head_sub_id:
            # Fallback to "head sub id" if others are missing
            formatted_id = format_number(last_head_sub_id)
            current_status_type = "OK"  # Assume OK if this is the latest relevant info
            # Don't increment sync count here, only on Sync -> OK transition based on "Checking" logs
            status_message = f"OK: {formatted_id}"  # Use current_sync_count
            logger.info(f"Node status based on head sub id. Head sub ID: {last_head_sub_id}")
        else:
            logger.info("No relevant subscription log entries found in the last 10 minutes. Status: Idle.")
            status_message = "Idle"
            current_status_type = "Idle"

        # Return the results instead of writing to a file
        return {
            "status_message": status_message,
            "current_status_type": current_status_type,
            "current_sync_count": current_sync_count
        }
    except subprocess.CalledProcessError as e:
        error_msg = f"Error: Docker logs failed ({e.returncode})"
        logger.error(f"Error running docker logs command: {e.stderr or e.stdout or e}")
        # Return error status and original sync count
        return {
            "status_message": error_msg,
            "current_status_type": "Error",
            "current_sync_count": initial_sync_count  # Return original count on error
        }
    except Exception as e:
        error_msg = "Error: Log processing failed"
        logger.error(f"Unexpected error processing logs: {e}", exc_info=True)
        # Return error status and original sync count
        return {
            "status_message": error_msg,
            "current_status_type": "Error",
            "current_sync_count": initial_sync_count  # Return original count on error
        }

if __name__ == "__main__":
    colorama.init(autoreset=True)
    logger = logging.getLogger("Checker")
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    ch = logging.StreamHandler()
    ch.setFormatter(formatter)
    logger.addHandler(ch)
    logger.info("Checker started")
    self_update(logger)

    #random_sleep = random.randint(1, 60)
    #logger.info(f"Sleeping for {random_sleep} seconds")
    #time.sleep(random_sleep)

    grist_data = {}
    with open('/root/node/grist.json', 'r', encoding='utf-8') as f:
        grist_data = json.loads(f.read())

    GRIST_ROW_NAME = socket.gethostname()
    NODES_TABLE = "Nodes"
    grist = GRIST(grist_data.get('grist_server'), grist_data.get('grist_doc_id'), grist_data.get('grist_api_key'), logger)
    current_vm = grist.find_record(name=GRIST_ROW_NAME, table=NODES_TABLE)[0]
    def grist_callback(msg): grist.update(current_vm.id, msg, NODES_TABLE)

    # Initialize updates dictionary
    initial_updates = {}
    # Check and prepare update for Syncs if it's None or empty
    if not current_vm.Syncs:  # Handles None, empty string, potentially 0 if that's how Grist stores it
        initial_updates["Syncs"] = 0
    # Check and prepare update for Reboots if it's None or empty
    if not current_vm.Reboots:  # Handles None, empty string, potentially 0
        initial_updates["Reboots"] = 0

    # If there are updates, send them to Grist
    if initial_updates:
        try:
            logger.info(f"Found empty initial values, updating Grist: {initial_updates}")
            grist.update(current_vm.id, initial_updates, NODES_TABLE)
            # Re-fetch the record to ensure subsequent logic uses the updated values
            current_vm = grist.find_record(name=GRIST_ROW_NAME, table=NODES_TABLE)[0]
            logger.info("Grist updated successfully with initial zeros.")
        except Exception as e:
            logger.error(f"Failed to update Grist with initial zeros: {e}")
            # Decide how to proceed: maybe exit, maybe continue with potentially incorrect defaults
            # For now, we'll log the error and continue using the potentially incorrect defaults from the first fetch

    # Get initial state from Grist (now potentially updated)
    initial_sync_count = int(current_vm.Syncs or 0)  # 'or 0' still useful as fallback
    reboot_count = int(current_vm.Reboots or 0)  # 'or 0' still useful as fallback

    # Determine previous status type based on Health string (simplified)
    previous_health_status = current_vm.Health or "Idle"
    previous_status_type = "Idle"  # Default
    if previous_health_status.startswith("Sync"):
        previous_status_type = "Sync"
    elif previous_health_status.startswith("OK"):
        previous_status_type = "OK"
    elif previous_health_status.startswith("Error"):
        previous_status_type = "Error"  # Consider error state

    logger.info(f"Initial state from Grist - Syncs: {initial_sync_count}, Health: {previous_health_status}, Reboots: {reboot_count}")

    for attempt in range(3):
        try:
            vm_ip = os.popen("ip -4 addr show eth0 | grep -oP '(?<=inet )[^/]+'").read()
            vm_ip = vm_ip.strip()
            if vm_ip == "":
                logger.error("Failed to get VM IP address")
            else:
                logger.info(f"VM IP address: {vm_ip}")
                grist_callback({"IP": f"{vm_ip}"})

            # Pass initial state to check_logs
            result = check_logs(logger, initial_sync_count, previous_status_type)

            grist_updates = {"Health": result["status_message"]}

            # Update Syncs count in Grist only if it changed
            if result["current_sync_count"] != initial_sync_count:
                grist_updates["Syncs"] = result["current_sync_count"]
                logger.info(f"Sync count changed from {initial_sync_count} to {result['current_sync_count']}")

            # Send updates to Grist
            grist_callback(grist_updates)
            logger.info(f"Status update sent: {grist_updates}")

            # Reboot logic (remains mostly the same, reads Reboots from current_vm)
            if result["current_status_type"] == "Idle":  # Check type, not message
                uptime_seconds = os.popen("cat /proc/uptime | cut -d'.' -f1").read()
                uptime_seconds = int(uptime_seconds)
                if uptime_seconds > 60*60*4:
                    reboot_count = int(current_vm.Reboots or 0)
                    reboot_count += 1
                    # Include reboot count in the final Grist update before rebooting
                    grist_updates = { "Health": "Rebooting", "Reboots": reboot_count }
                    grist_callback(grist_updates)
                    logger.info(f"Idle detected for >4 hours (uptime: {uptime_seconds}s). Rebooting. Reboot count: {reboot_count}")
                    os.system("reboot")
            break  # Exit loop on success
        except Exception as e:
            logger.error(f"Error in main loop, attempt {attempt+1}/3: {e}", exc_info=True)
            if attempt == 2:
                # Log final error to Grist on last attempt
                try:
                    grist_updates = { "Health": f"Error: Main loop failed - {e}" }
                    grist_callback(grist_updates)
                except Exception as grist_e:
                    logger.error(f"Failed to log final error to Grist: {grist_e}")
            time.sleep(5)  # Wait before retrying
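
For reference, the two formatting helpers above behave like this (an illustrative doctest-style session; the outputs follow directly from the implementation):

    >>> format_number("999")
    '999'
    >>> format_number("1000")
    '1k'
    >>> format_number("242029")
    '242.029k'
    >>> format_number("not-a-number")
    'NaN'
    >>> clean_ansi("\x1b[32mOK\x1b[0m")
    'OK'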

docker-compose.yaml

@@ -1,8 +1,6 @@
 version: '3'
 services:
   node:
-    image: ritualnetwork/infernet-node:1.2.0
+    image: ritualnetwork/infernet-node:1.4.0
     ports:
       - "0.0.0.0:4000:4000"
     volumes:
@@ -21,6 +19,11 @@ services:
       - "host.docker.internal:host-gateway"
     stop_grace_period: 1m
     container_name: infernet-node
+    logging:
+      driver: "json-file"
+      options:
+        max-file: 5
+        max-size: 10m

   redis:
     image: redis:latest
@@ -33,6 +36,11 @@ services:
       - redis-data:/data
     restart:
       unless-stopped
+    logging:
+      driver: "json-file"
+      options:
+        max-file: 5
+        max-size: 10m

   fluentbit:
     image: fluent/fluent-bit:latest
@@ -47,6 +55,11 @@ services:
       - network
     restart:
       unless-stopped
+    logging:
+      driver: "json-file"
+      options:
+        max-file: 5
+        max-size: 10m

   infernet-anvil:
     image: ritualnetwork/infernet-anvil:1.0.0
@@ -58,6 +71,11 @@ services:
     container_name: infernet-anvil
     restart:
       unless-stopped
+    logging:
+      driver: "json-file"
+      options:
+        max-file: 5
+        max-size: 10m

 networks:
   network:

grist.json (new file, 5 lines added)

@@ -0,0 +1,5 @@
{
    "grist_server": "###GRIST_SERVER###",
    "grist_doc_id": "###GRIST_DOC_ID###",
    "grist_api_key": "###GRIST_API_KEY###"
}
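
The ###...### markers are placeholders that update.sh (further down) rewrites in place; the playbook invokes it with GRIST_SERVER, GRIST_DOC_ID, and GRIST_API_KEY. A minimal startup-guard sketch, assuming the same path checker.py reads (/root/node/grist.json):

    import json

    # checker.py loads this file at startup (path taken from the script above).
    with open('/root/node/grist.json', 'r', encoding='utf-8') as f:
        grist_data = json.loads(f.read())

    # Fail fast if update.sh has not yet replaced the ###...### placeholders,
    # rather than letting GristDocAPI connect with bogus credentials.
    for key in ('grist_server', 'grist_doc_id', 'grist_api_key'):
        value = grist_data.get(key) or ''
        if not value or value.startswith('###'):
            raise ValueError(f"{key} placeholder not substituted in grist.json")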

playbook.yml

@@ -1,16 +1,48 @@
 ---
 - name: System Setup and Configuration
   hosts: all
-  become: yes
+  become: true
   tasks:
     - name: Set locale to C.UTF-8
       command: localectl set-locale LANG=C.UTF-8

+    - name: Create APT configuration file to assume yes
+      ansible.builtin.copy:
+        dest: /etc/apt/apt.conf.d/90forceyes
+        content: |
+          APT::Get::Assume-Yes "true";
+        mode: '0644'
+
+    - name: Append command to .bash_history
+      ansible.builtin.blockinfile:
+        path: "~/.bash_history"
+        create: true
+        block: |
+          cd ~/node; bash rebuild.sh
+          nano ~/node/projects/hello-world/container/config.json
+          docker logs infernet-node -f
+          docker logs --since 10m infernet-node -f
+          journalctl -u node-checker.service
+          journalctl -u grpcbalancer.service
+          nano ~/node/deploy/config.json
+          docker compose -f deploy/docker-compose.yaml down; docker compose -f deploy/docker-compose.yaml up -d
+        marker: ""
+        mode: '0644'
+
+    - name: Append command to .bash_rc
+      ansible.builtin.blockinfile:
+        path: "~/.bashrc"
+        create: true
+        insertafter: EOF
+        block: |
+          cd /root/node
+        marker: ""
+        mode: '0644'
+
     - name: Update /etc/bash.bashrc
-      blockinfile:
+      ansible.builtin.blockinfile:
         path: /etc/bash.bashrc
         block: |
           export HISTTIMEFORMAT='%F, %T '
@@ -23,27 +55,23 @@
           export LC_ALL=C.UTF-8
           alias ls='ls --color=auto'
           shopt -s cmdhist
+        create: true
+        marker: ""
+        mode: '0644'

-    - name: Ensure ~/.inputrc exists
-      file:
-        path: /root/.inputrc
-        state: touch
-
-    - name: Update ~/.inputrc
-      blockinfile:
-        path: ~/.inputrc
+    - name: Update .inputrc for the current user
+      ansible.builtin.blockinfile:
+        path: "{{ ansible_env.HOME }}/.inputrc"
         block: |
           "\e[A": history-search-backward
           "\e[B": history-search-forward
-
-    - name: Ensure ~/.nanorc exists
-      file:
-        path: /root/.nanorc
-        state: touch
+        create: true
+        marker: ""
+        mode: '0644'

     - name: Update ~/.nanorc
-      blockinfile:
-        path: ~/.nanorc
+      ansible.builtin.blockinfile:
+        path: "{{ ansible_env.HOME }}/.nanorc"
         block: |
           set nohelp
           set tabsize 4
@@ -54,24 +82,31 @@
           set backupdir /tmp/
           set locking
           include /usr/share/nano/*.nanorc
+        create: true
+        marker: ""
+        mode: '0644'

     - name: Set hostname
-      shell: |
-        hostnamectl set-hostname {{ serverid }}
-        echo "127.0.1.1 {{ serverid }}" >> /etc/hosts
+      ansible.builtin.hostname:
+        name: "{{ serverid }}"

-    - name: Update and upgrade apt
-      apt:
-        update_cache: yes
-        upgrade: dist
-        force_apt_get: yes
-      register: apt_update_result
-      retries: 50
-      delay: 50
-      until: apt_update_result is succeeded
+    - name: Ensure hostname is in /etc/hosts
+      ansible.builtin.lineinfile:
+        path: /etc/hosts
+        regexp: '^127\.0\.1\.1\s+'
+        line: "127.0.1.1 {{ serverid }}"
+        state: present
+
+    #- name: Update and upgrade apt
+    #  ansible.builtin.apt:
+    #    update_cache: true
+    #    upgrade: dist
+    #    force_apt_get: true
+    #  register: apt_update_result
+    #  until: apt_update_result is success

     - name: Install necessary packages
-      apt:
+      ansible.builtin.apt:
         name:
           - apt-transport-https
           - ca-certificates
@@ -83,132 +118,151 @@
         state: present

     - name: Install pip package web3
-      pip:
+      ansible.builtin.pip:
         name: web3
         extra_args: --break-system-packages

-    - name: Install Docker
-      shell: curl -sL https://get.docker.com | sudo sh -
-
-    - name: Ensure /etc/docker/daemon.json exists
-      file:
-        path: /etc/docker/daemon.json
-        state: touch
+    # - name: Install Docker
+    #   ansible.builtin.shell: curl -sL https://get.docker.com | sudo sh -
+    #

     - name: Update Docker daemon configuration for journald logging
-      copy:
+      ansible.builtin.copy:
         dest: /etc/docker/daemon.json
         content: |
           {
-            "log-driver": "journald"
+            "log-driver": "journald",
+            "registry-mirrors": ["https://dockerregistry.vvzvlad.xyz"]
           }

     - name: Restart Docker
-      service:
+      ansible.builtin.service:
         name: docker
         state: restarted

-    - name: Docker login
-      shell: docker login -u {{ docker_username }} -p {{ docker_password }}
+    - name: Docker pull hello-world
+      ansible.builtin.shell: docker pull ritualnetwork/hello-world-infernet:latest

-    - name: Update journald log SystemMaxUse=2G configuration
-      lineinfile:
-        path: /etc/systemd/journald.conf
-        line: 'SystemMaxUse=2G'
-        insertafter: EOF
-        create: yes
-
-    - name: Restart journald
-      service:
-        name: systemd-journald
-        state: restarted
+    # - name: Update journald log SystemMaxUse=2G configuration
+    #   ansible.builtin.lineinfile:
+    #     path: /etc/systemd/journald.conf
+    #     regexp: '^SystemMaxUse='
+    #     line: 'SystemMaxUse=2G'
+    #     state: present
+    #     backup: yes
+    #     validate: 'journaldctl check-config %s'
+    #
+    # - name: Restart journald
+    #   ansible.builtin.service:
+    #     name: systemd-journald
+    #     state: restarted

     - name: Setup Foundry
-      shell: |
+      ansible.builtin.shell: |
         mkdir -p ~/foundry && cd ~/foundry
         curl -L https://foundry.paradigm.xyz | bash
       args:
         executable: /bin/bash

     - name: Run foundryup
-      shell: |
+      ansible.builtin.shell: |
         source ~/.bashrc && foundryup
       args:
         executable: /bin/bash

-    - name: Clone ritual-says-gm repository
-      git:
-        repo: https://gitea.vvzvlad.xyz/vvzvlad/ritual-says-gm.git
-        dest: ~/ritual-says-gm
-        force: yes
+    - name: Clone repository
+      ansible.builtin.git:
+        repo: https://gitea.vvzvlad.xyz/vvzvlad/ritual.git
+        dest: "{{ ansible_env.HOME }}/node"
+        version: "{{ git_version }}"
+        force: true
+      async: "{{ 60 * 15 }}"
+      poll: 30

-    - name: Update wallet, private key and RPC URL in project
-      shell: |
-        cd ~/ritual-says-gm
-        bash update.sh {{ wallet }} {{ private_key }} {{ rpc_url }}
+    - name: Update environment variables
+      ansible.builtin.shell: |
+        chmod +x ./update.sh
+        ./update.sh ID "{{ serverid }}"
+        ./update.sh GRIST_SERVER "{{ grist_server }}"
+        ./update.sh GRIST_DOC_ID "{{ grist_doc_id }}"
+        ./update.sh GRIST_API_KEY "{{ grist_api_key }}"
+        ./update.sh WALLET_ADDRESS "{{ wallet }}"
+        ./update.sh PRIVATE_KEY "{{ private_key }}"
+        ./update.sh RPC_URL "{{ rpc_url }}"
+      args:
+        chdir: "{{ ansible_env.HOME }}/node"
+      changed_when: false

-    - name: Remove old Forge and Infernet SDK
-      shell: |
-        cd ~/ritual-says-gm
-        rm -rf projects/hello-world/contracts/lib/forge-std
-        rm -rf projects/hello-world/contracts/lib/infernet-sdk
-
     - name: Install Forge and Infernet SDK
-      shell: |
-        cd ~/foundry && source ~/.bashrc && foundryup
-        cd ~/ritual-says-gm
-        cd projects/hello-world/contracts
-        forge install --no-commit foundry-rs/forge-std
-        forge install --no-commit ritual-net/infernet-sdk
+      ansible.builtin.shell: |
+        rm -rf {{ ansible_env.HOME }}/node/projects/hello-world/contracts/lib/forge-std
+        rm -rf {{ ansible_env.HOME }}/node/projects/hello-world/contracts/lib/infernet-sdk
+        cd {{ ansible_env.HOME }}/foundry && source {{ ansible_env.HOME }}/.bashrc && foundryup
+        cd {{ ansible_env.HOME }}/node/projects/hello-world/contracts
+        forge install foundry-rs/forge-std
+        forge install ritual-net/infernet-sdk
       args:
         executable: /bin/bash

     - name: Deploy container
-      shell: |
-        cd ~/ritual-says-gm && project=hello-world make deploy-container
+      ansible.builtin.shell: project=hello-world make deploy-container
+      args:
+        chdir: "{{ ansible_env.HOME }}/node"

     - name: Deploy contracts
-      shell: cd ~/ritual-says-gm && project=hello-world make deploy-contracts 2>&1
+      ansible.builtin.shell: project=hello-world make deploy-contracts 2>&1
       register: contract_deploy_output
-      ignore_errors: yes
-      retries: 3
-      delay: 53
+      args:
+        chdir: "{{ ansible_env.HOME }}/node"
+        executable: /bin/bash
+      retries: 5
+      delay: 120
+      async: 120
+      poll: 30
       until: '"ONCHAIN EXECUTION COMPLETE & SUCCESSFUL" in contract_deploy_output.stdout'
+      failed_when: false

     - name: Update CallContract.s.sol with contract address
-      shell: |
-        cd ~/ritual-says-gm
-        contract_address=$(jq -r '.transactions[0].contractAddress' projects/hello-world/contracts/broadcast/Deploy.s.sol/8453/run-latest.json)
-        checksum_address=$(python3 toChecksumAddress.py $contract_address)
-        sed -i "s/SaysGM(.*/SaysGM($checksum_address);/" projects/hello-world/contracts/script/CallContract.s.sol
+      ansible.builtin.shell: bash update_contracts.sh
+      args:
+        chdir: "{{ ansible_env.HOME }}/node"

     - name: Call contract
-      shell: cd ~/ritual-says-gm && project=hello-world make call-contract 2>&1
-      register: contract_output
-      ignore_errors: yes
-      retries: 3
-      delay: 55
-      until: '"ONCHAIN EXECUTION COMPLETE & SUCCESSFUL" in contract_output.stdout'
+      ansible.builtin.shell: project=hello-world make call-contract 2>&1
+      register: contract_call_output
+      args:
+        chdir: "{{ ansible_env.HOME }}/node"
+        executable: /bin/bash
+      retries: 5
+      delay: 120
+      async: 120
+      poll: 30
+      until: '"ONCHAIN EXECUTION COMPLETE & SUCCESSFUL" in contract_call_output.stdout'
+      failed_when: false

-    - name: Set Docker containers to restart unless stopped
-      shell: |
-        docker update --restart unless-stopped hello-world
-        docker update --restart unless-stopped infernet-node
-        docker update --restart unless-stopped deploy-redis-1
-        docker update --restart unless-stopped infernet-anvil
-        docker update --restart unless-stopped deploy-fluentbit-1
-
-    - name: Create APT configuration file to assume yes
-      copy:
-        dest: /etc/apt/apt.conf.d/90forceyes
-        content: |
-          APT::Get::Assume-Yes "true";
-
-    - name: Set permissions on APT configuration file
-      file:
-        path: /etc/apt/apt.conf.d/90forceyes
-        mode: '0644'
-
-    - name: Remove docker login credentials
-      shell: rm -rf /root/.docker/config.json
-      ignore_errors: yes
+    - name: Copy checker service file
+      ansible.builtin.copy:
+        dest: /etc/systemd/system/node-checker.service
+        content: |
+          [Unit]
+          Description=Node Checker Service
+          After=network.target
+
+          [Service]
+          Type=simple
+          User=root
+          WorkingDirectory={{ ansible_env.HOME }}/node
+          ExecStart=/usr/bin/python3 {{ ansible_env.HOME }}/node/checker.py
+          Restart=always
+          RestartSec=600
+
+          [Install]
+          WantedBy=multi-user.target
+        mode: '0644'
+
+    - name: Enable and start node-checker service
+      ansible.builtin.systemd:
+        name: node-checker
+        enabled: yes
+        state: started
+        daemon_reload: yes

projects/hello-world/container/config.json

@@ -5,29 +5,27 @@
   },
   "chain": {
     "enabled": true,
-    "trail_head_blocks": 0,
+    "trail_head_blocks": 3,
     "rpc_url": "###RPC_URL###",
     "registry_address": "0x3B1554f346DFe5c482Bb4BA31b880c1C18412170",
     "wallet": {
       "max_gas_limit": 4000000,
       "private_key": "###PRIVATE_KEY###"
-    }
+    },
+    "snapshot_sync": {
+      "sleep": 3,
+      "batch_size": 800,
+      "starting_sub_id": 242029,
+      "sync_period": 30
+    }
   },
   "startup_wait": 1.0,
-  "docker": {
-    "username": "your-username",
-    "password": ""
-  },
   "redis": {
     "host": "redis",
     "port": 6379
   },
   "forward_stats": true,
-  "snapshot_sync": {
-    "sleep": 2,
-    "batch_size": 10000,
-    "starting_sub_id": 100000
-  },
   "containers": [
     {
       "id": "hello-world",

rebuild.sh (new file, 8 lines added)

@@ -0,0 +1,8 @@
#!/bin/bash
set -e
cd ~/node
project=hello-world make deploy-container
project=hello-world make deploy-contracts
bash update_contracts.sh
project=hello-world make call-contract

update.sh

@@ -1,23 +1,21 @@
-#!/bin/bash
+#!/usr/bin/env bash

-if [ "$#" -ne 3 ]; then
-    echo "Usage: $0 <wallet_address> <private_key> <rpc_url>"
+if [ "$#" -ne 2 ]; then
+    echo "Usage: $0 <PARAMETER> <NEW_VALUE>"
     exit 1
 fi

-WALLET_ADDRESS=$1
-PRIVATE_KEY=$2
-RPC_URL=$3
+PARAMETER=$1
+NEW_VALUE=$2

 # List of files
 FILES=(
     "./projects/hello-world/container/config.json"
     "./projects/hello-world/contracts/Makefile"
+    "grist.json"
 )

 for FILE in "${FILES[@]}"; do
     EXPANDED_FILE=$(eval echo "$FILE")
-    sed -i "s|###WALLET_ADDRESS###|$WALLET_ADDRESS|g" "$EXPANDED_FILE"
-    sed -i "s|###PRIVATE_KEY###|$PRIVATE_KEY|g" "$EXPANDED_FILE"
-    sed -i "s|###RPC_URL###|$RPC_URL|g" "$EXPANDED_FILE"
+    sed -i "s|###$PARAMETER###|$NEW_VALUE|g" "$EXPANDED_FILE"
 done

update_contracts.sh (new file, 6 lines added)

@@ -0,0 +1,6 @@
#!/bin/bash
set -e
contract_address=$(jq -r '.transactions[0].contractAddress' projects/hello-world/contracts/broadcast/Deploy.s.sol/8453/run-latest.json)
checksum_address=$(python3 toChecksumAddress.py $contract_address)
sed -i "s/SaysGM(.*/SaysGM($checksum_address);/" projects/hello-world/contracts/script/CallContract.s.sol

ws.code-workspace (new file, 28 lines added)

@@ -0,0 +1,28 @@
{
    "folders": [
        {
            "path": "."
        },
        {
            "path": "../ritual-git"
        }
    ],
    "settings": {
        "workbench.colorCustomizations": {
            "activityBar.activeBackground": "#fb94f8",
            "activityBar.background": "#fb94f8",
            "activityBar.foreground": "#15202b",
            "activityBar.inactiveForeground": "#15202b99",
            "activityBarBadge.background": "#777b05",
            "activityBarBadge.foreground": "#e7e7e7",
            "commandCenter.border": "#15202b99",
            "sash.hoverBorder": "#fb94f8",
            "titleBar.activeBackground": "#f963f5",
            "titleBar.activeForeground": "#15202b",
            "titleBar.inactiveBackground": "#f963f599",
            "titleBar.inactiveForeground": "#15202b99"
        },
        "peacock.color": "#f963f5",
        "makefile.configureOnOpen": false
    }
}