This update introduces a new `format_number` function that formats subscription IDs into a more readable form (e.g., converting 1000 to '1k'). The `check_logs` function now uses this formatting for both the head subscription ID and the last subscription ID in its status messages, improving the readability of subscription status reporting in log analysis.
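The actual implementation is not shown here, but a minimal sketch of such a formatter, consistent with the 1000 → '1k' example above, might look like this (the function name matches the description; the thresholds and rounding behavior are assumptions):

```python
def format_number(n: int) -> str:
    """Format a subscription ID into a short human-readable form,
    e.g. 1000 -> '1k'. Hypothetical sketch; the real function in
    checker.py may choose different suffixes or rounding."""
    for threshold, suffix in ((1_000_000_000, "b"), (1_000_000, "m"), (1_000, "k")):
        if n >= threshold:
            value = n / threshold
            # Drop a trailing '.0' so 1000 renders as '1k', not '1.0k'
            text = f"{value:.1f}".rstrip("0").rstrip(".")
            return f"{text}{suffix}"
    return str(n)
```

`check_logs` would then call this helper when building its status messages, e.g. `f"head subscription: {format_number(head_id)}"`.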
Repository contents:

- .github/workflows
- deploy
- grpcbalancer
- projects
- .gitignore
- .gitmodules
- .pre-commit-config.yaml
- 0x8d871ef2826ac9001fb2e33fdd6379b6aabf449c_abi.json
- CHANGELOG.md
- checker.py
- grist.json
- internal.mk
- Makefile
- playbook.yml
- PUBLISHING.md
- pyproject.toml
- README.md
- rebuild.sh
- requirements.txt
- toChecksumAddress.py
- update_contracts.sh
- update.sh
infernet-container-starter
Welcome to this repository! 🎉 This repo contains a series of examples that demonstrate the true power of infernet, and the wide range of applications that can be built using it:
Examples
- Hello World: Infernet's version of a hello-world program. Here, we deploy a container that simply echoes the input back to us.
- Running a Torch Model on Infernet: This example shows you how to deploy a pre-trained PyTorch model to infernet. Using this example will make it easier for you to deploy your own models to infernet.
- Running an ONNX Model on Infernet: Same as the previous example, but this time we deploy an ONNX model to infernet.
- Prompt to NFT: In this example, we use stablediffusion to mint NFTs on-chain using a prompt.
- TGI Inference with Mistral-7b: This example shows you how to deploy an arbitrary LLM using Hugging Face's TGI (Text Generation Inference), and use it with an infernet node.
- Running OpenAI's GPT-4 on Infernet: This example shows you how to deploy OpenAI's GPT-4 model to infernet.