feat: publishing infernet-container-starter v0.1.0

ritual-all 2024-03-29 10:49:24 -04:00
commit 41aaa152e6
No known key found for this signature in database
GPG Key ID: 44F6A6F5B09FFEB8
24 changed files with 1135 additions and 0 deletions

23
.gitignore vendored Normal file

@ -0,0 +1,23 @@
# Byte-compiled / optimized / DLL files
deploy/config.json
__pycache__/
*.py[cod]
*$py
# C extensions
*.so
# Distribution / packaging
dist/
build/
*.egg-info/
# IDE specific files
.vscode/
.idea/
# Virtual environment
venv/
**/.idea

6
.gitmodules vendored Normal file

@ -0,0 +1,6 @@
[submodule "projects/hello-world/contracts/lib/forge-std"]
path = projects/hello-world/contracts/lib/forge-std
url = https://github.com/foundry-rs/forge-std
[submodule "projects/hello-world/contracts/lib/infernet-sdk"]
path = projects/hello-world/contracts/lib/infernet-sdk
url = https://github.com/ritual-net/infernet-sdk

9
Makefile Normal file

@ -0,0 +1,9 @@
deploy-container:
	cp ./projects/$(project)/container/config.json deploy/config.json
	cd deploy && docker-compose up

deploy-contracts:
	$(MAKE) -C ./projects/$(project)/contracts deploy

call-contract:
	$(MAKE) -C ./projects/$(project)/contracts call-contract

221
README.md Normal file

@ -0,0 +1,221 @@
# infernet-container-starter
Starter examples for deploying to infernet.
# Getting Started
To interact with infernet, you can either create a job by accessing an infernet
node directly through its API (we'll refer to this as an off-chain job), or by
creating a subscription on-chain (we'll refer to this as an on-chain job).
## Requesting an off-chain job: Hello World!
The easiest way to get started is to run our hello-world container.
This is a simple [flask-app](projects/hello-world/container/src/app.py) that
is compatible with `infernet`, and simply
[echoes what you send to it](./projects/hello-world/container/src/app.py#L16).
We already have it [hosted on Docker Hub](https://hub.docker.com/r/ritualnetwork/hello-world-infernet).
If you're curious how it's made, you can
follow the instructions [here](projects/hello-world/container/README.md) to build your own infernet-compatible
container.
### Install Docker
To run this, you'll need to have docker installed. You can find instructions
for installing docker [here](https://docs.docker.com/install/).
### Running Locally
First, ensure that the docker daemon is running.
Then, from the top-level project directory, run the following make command:
```bash
project=hello-world make deploy-container
```
This will deploy an infernet node along with the `hello-world` image.
### Creating an off-chain job through the API
You can create an off-chain job by posting to the `node` directly.
```bash
curl -X POST http://127.0.0.1:4000/api/jobs \
-H "Content-Type: application/json" \
-d '{"containers":["hello-world"], "data": {"some": "input"}}'
# returns
{"id":"d5281dd5-c4f4-4523-a9c2-266398e06007"}
```
This will return the id of that job.
### Getting the status/result/errors of a job
You can check the status of a job like so:
```bash
curl -X GET http://127.0.0.1:4000/api/jobs?id=d5281dd5-c4f4-4523-a9c2-266398e06007
# returns
[{"id":"d5281dd5-c4f4-4523-a9c2-266398e06007", "result":{"container":"hello-world","output": {"output":"hello, world!, your input was: {'source': 1, 'data': {'some': 'input'}}"}} ,"status":"success"}]
```
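If you'd rather not poll by hand, the same flow works from a short script. Here's a minimal Python sketch using `requests` (mirroring the bundled deploy test script), assuming the node is reachable on port `4000`:
```python
import time

import requests

NODE_URL = "http://127.0.0.1:4000"

# Submit an off-chain job to the hello-world container.
job = requests.post(
    f"{NODE_URL}/api/jobs",
    json={"containers": ["hello-world"], "data": {"some": "input"}},
).json()

# Poll the jobs endpoint until the node reports a terminal status.
while True:
    result = requests.get(f"{NODE_URL}/api/jobs", params={"id": job["id"]}).json()[0]
    if result["status"] != "running":
        break
    time.sleep(1)

print(result["status"], result.get("result"))
```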
### Configuration
This project already comes with a pre-filled config file. The config
file for the hello-world project is located [here](projects/hello-world/container/config.json):
```bash
projects/hello-world/container/config.json
```
## Requesting an on-chain job
In this section, we'll go over how to request an on-chain job on a local testnet.
### Infernet's Anvil Testnet
To request an on-chain job, you'll need to deploy contracts using the Infernet SDK.
We already have a public [anvil node](https://hub.docker.com/r/ritualnetwork/infernet-anvil) docker image which has the
corresponding infernet sdk contracts deployed, along with a node that has
registered itself to listen to on-chain subscription events.
* Coordinator Address: `0x5FbDB2315678afecb367f032d93F642f64180aa3`
* Node Address: `0x70997970C51812dc3A010C7d01b50e0d17dc79C8` (the second of Anvil's default accounts)
### Deploying Infernet Node & Infernet's Anvil Testnet
This step is similar to the section above:
```bash
project=hello-world make deploy-container
```
In another terminal, run `docker container ls`. You should see something like this:
```bash
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c2ca0ffe7817 ritualnetwork/infernet-anvil:0.0.0 "anvil --host 0.0.0.…" 9 seconds ago Up 8 seconds 0.0.0.0:8545->3000/tcp anvil-node
0b686a6a0e5f ritualnetwork/hello-world-infernet:0.0.2 "gunicorn app:create…" 9 seconds ago Up 8 seconds 0.0.0.0:3000->3000/tcp hello-world
28b2e5608655 ritualnetwork/infernet-node:0.1.1 "/app/entrypoint.sh" 10 seconds ago Up 10 seconds 0.0.0.0:4000->4000/tcp deploy-node-1
03ba51ff48b8 fluent/fluent-bit:latest "/fluent-bit/bin/flu…" 10 seconds ago Up 10 seconds 2020/tcp, 0.0.0.0:24224->24224/tcp deploy-fluentbit-1
a0d96f29a238 redis:latest "docker-entrypoint.s…" 10 seconds ago Up 10 seconds 0.0.0.0:6379->6379/tcp deploy-redis-1
```
You can see that the anvil node is running on port `8545` and the infernet node on
port `4000`, same as before. If you'd like, you can sanity-check the addresses listed above against this local testnet, as shown below.
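A minimal sketch for that check, assuming `web3.py` (v6) is installed:
```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

coordinator = Web3.to_checksum_address("0x5FbDB2315678afecb367f032d93F642f64180aa3")
node = Web3.to_checksum_address("0x70997970C51812dc3A010C7d01b50e0d17dc79C8")

# The coordinator address should hold deployed contract code...
print("coordinator has code:", len(w3.eth.get_code(coordinator)) > 0)

# ...while the node address is a regular, pre-funded Anvil account.
print("node balance (ETH):", w3.from_wei(w3.eth.get_balance(node), "ether"))
```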
### Deploying Consumer Contracts
We have a [sample forge project](./projects/hello-world/contracts) which contains
a simple consumer contract, [`SaysGM`](./projects/hello-world/contracts/src/SaysGM.sol).
All this contract does is request a job from the infernet node and, upon receiving
the result, print it to the console using `forge`'s `console2` logging.
**Anvil Logs**: First, it's useful to look at the logs of the anvil node to see what's going on. In
a new terminal, run `docker logs -f anvil-node`.
**Deploying the contracts**: In another terminal, run the following command:
```bash
project=hello-world make deploy-contracts
```
You should see logs like the following in the anvil terminal:
```bash
eth_sendRawTransaction
eth_getTransactionReceipt
Transaction: 0x23ca6b1d1823ad5af175c207c2505112f60038fc000e1e22509816fa29a3afd6
Contract created: 0x663f3ad617193148711d28f5334ee4ed07016602
Gas used: 476669
Block Number: 1
Block Hash: 0x6b026b70fbe97b4a733d4812ccd6e8e25899a1f6c622430c3fb07a2e5c5c96b7
Block Time: "Wed, 17 Jan 2024 22:17:31 +0000"
eth_getTransactionByHash
eth_getTransactionReceipt
eth_blockNumber
```
We can see that a new contract has been created at `0x663f3ad617193148711d28f5334ee4ed07016602`.
That's the address of the `SaysGM` contract.
### Calling the contract
Now, let's call the contract. In the same terminal, run the following command:
```bash
project=hello-world make call-contract
```
You should first see that a transaction was sent to the `SaysGM` contract:
```bash
eth_getTransactionReceipt
Transaction: 0xe56b5b6ac713a978a1631a44d6a0c9eb6941dce929e1b66b4a2f7a61b0349d65
Gas used: 123323
Block Number: 2
Block Hash: 0x3d6678424adcdecfa0a8edd51e014290e5f54ee4707d4779e710a2a4d9867c08
Block Time: "Wed, 17 Jan 2024 22:18:39 +0000"
eth_getTransactionByHash
```
Then, right after that you should see another transaction submitted by the `node`,
which is the result of the job request:
```bash
eth_chainId
eth_sendRawTransaction
_____ _____ _______ _ _ _
| __ \|_ _|__ __| | | | /\ | |
| |__) | | | | | | | | | / \ | |
| _ / | | | | | | | |/ /\ \ | |
| | \ \ _| |_ | | | |__| / ____ \| |____
|_| \_\_____| |_| \____/_/ \_\______|
subscription Id 1
interval 1
redundancy 1
node 0x70997970C51812dc3A010C7d01b50e0d17dc79C8
input:
0x
output:
0x000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000607b276f7574707574273a202268656c6c6f2c20776f726c64212c20796f757220696e707574207761733a207b27736f75726365273a20302c202764617461273a20273437366636663634323036643666373236653639366536373231277d227d
proof:
0x
Transaction: 0x949351d02e2c7f50ced2be06d14ca4311bd470ec80b135a2ce78a43f43e60d3d
Gas used: 94275
Block Number: 3
Block Hash: 0x57ed0cf39e3fb3a91a0d8baa5f9cb5d2bdc1875f2ad5d6baf4a9466f522df354
Block Time: "Wed, 17 Jan 2024 22:18:40 +0000"
eth_blockNumber
eth_newFilter
```
We can see that the `node` address matches the node address listed earlier for
our Infernet Anvil testnet. The `output` bytes can also be decoded locally, as shown below.
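The `output` field above is ABI-encoded dynamic data: a 32-byte offset, a 32-byte length, then the raw payload. A minimal Python sketch for decoding it (using the value from the logs above):
```python
# Full `output` value from the anvil logs above.
output_hex = (
    "0x0000000000000000000000000000000000000000000000000000000000000020"
    "0000000000000000000000000000000000000000000000000000000000000060"
    "7b276f7574707574273a202268656c6c6f2c20776f726c64212c20796f757220"
    "696e707574207761733a207b27736f75726365273a20302c202764617461273a"
    "20273437366636663634323036643666373236653639366536373231277d227d"
)

raw = bytes.fromhex(output_hex[2:])            # strip the leading "0x"
offset = int.from_bytes(raw[:32], "big")       # start of the length word
length = int.from_bytes(raw[offset:offset + 32], "big")
payload = raw[offset + 32:offset + 32 + length]

print(payload.decode())
# {'output': "hello, world!, your input was: {'source': 0, 'data': '476f6f64206d6f726e696e6721'}"}
```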
### Next Steps
To learn more about on-chain requests, check out the following resources:
1. [Tutorial](./projects/hello-world/contracts/Tutorial.md) on this project's consumer smart contracts.
2. [Infernet Callback Consumer Tutorial](https://docs.ritual.net/infernet/sdk/consumers/Callback)
3. [Infernet Nodes Documentation](https://docs.ritual.net/infernet/nodes)


@ -0,0 +1,57 @@
version: '3'

services:
  node:
    image: ritualnetwork/infernet-node:latest
    ports:
      - "0.0.0.0:4000:4000"
    volumes:
      - type: bind
        source: ./config.json
        target: /app/config.json
      - node-logs:/logs
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - network
    depends_on:
      - redis
    restart: on-failure
    extra_hosts:
      - "host.docker.internal:host-gateway"
    stop_grace_period: 1m

  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    networks:
      - network
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
      - redis-data:/data
    restart: on-failure

  fluentbit:
    image: fluent/fluent-bit:latest
    ports:
      - "24224:24224"
    environment:
      - FLUENTBIT_CONFIG_PATH=/fluent-bit/etc/fluent-bit.conf
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
      - /var/log:/var/log:ro
    networks:
      - network
    restart: on-failure

networks:
  network:

volumes:
  node-logs:
  redis-data:

38
deploy/fluent-bit.conf Normal file

@ -0,0 +1,38 @@
[SERVICE]
    Flush                     1
    Daemon                    Off
    Log_Level                 info
    storage.path              /tmp/fluentbit.log
    storage.sync              normal
    storage.checksum          on
    storage.backlog.mem_limit 5M

[INPUT]
    Name         forward
    Listen       0.0.0.0
    Port         24224
    Storage.type filesystem

[OUTPUT]
    name  stdout
    match *

[OUTPUT]
    Name     pgsql
    Match    stats.node
    Host     meta-sink.ritual.net
    Port     5432
    User     append_only_user
    Password ogy29Z4mRCLfpup*9fn6
    Database postgres
    Table    node_stats

[OUTPUT]
    Name     pgsql
    Match    stats.live
    Host     meta-sink.ritual.net
    Port     5432
    User     append_only_user
    Password ogy29Z4mRCLfpup*9fn6
    Database postgres
    Table    live_stats

23
deploy/redis.conf Normal file

@ -0,0 +1,23 @@
# Listen on localhost
bind 127.0.0.1 ::1
# Loglevel
loglevel notice
# Path to log file
logfile /var/log/redis/redis-server.log
# Number of databases
databases 2
# Working directory
dir /var/lib/redis
# Save to disk every 60 seconds if at least 1 change
save 60 1
# Maximum memory before eviction starts
maxmemory 1gb
# Eviction policy
maxmemory-policy allkeys-lru


@ -0,0 +1,20 @@
FROM python:3.11-slim as builder
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONPATH src
WORKDIR /app
RUN apt-get update
COPY src/requirements.txt .
RUN pip install --upgrade pip && pip install -r requirements.txt
COPY src src
ENTRYPOINT ["gunicorn", "app:create_app()"]
CMD ["-b", "0.0.0.0:3000"]


@ -0,0 +1,20 @@
DOCKER_ORG := ritualnetwork
TAG := $(DOCKER_ORG)/hello-world-infernet:latest

.PHONY: build run publish

build:
	@docker build -t $(TAG) .

update-tag:
	jq ".containers[0].image = \"$(TAG)\"" config.json > updated_config.json && mv updated_config.json config.json

run: build
	docker run \
		-p 3000:3000 $(TAG)

# You may need to set up a docker builder, to do so run:
# docker buildx create --name mybuilder --bootstrap --use
# refer to https://docs.docker.com/build/building/multi-platform/#building-multi-platform-images for more info
build-multiplatform:
	docker buildx build --platform linux/amd64,linux/arm64 -t $(TAG) --push .


@ -0,0 +1,163 @@
# Creating an infernet-compatible `hello-world` container
In this tutorial, we'll create a simple hello-world container that can be used
with infernet.
> [!NOTE]
> This directory (`projects/hello-world/container`) already includes the final result
> of this tutorial. Run the following tutorial in a new directory.
Let's get started! 🎉
## Step 1: create a simple flask-app and a requirements.txt file
First, we'll create a simple flask-app that returns a hello-world message.
We begin by creating a `src` directory:
```
mkdir src
```
Inside `src`, we create an `app.py` file with the following content:
```python
from typing import Any

from flask import Flask, request


def create_app() -> Flask:
    app = Flask(__name__)

    @app.route("/")
    def index() -> str:
        return "Hello world service!"

    @app.route("/service_output", methods=["POST"])
    def inference() -> dict[str, Any]:
        input = request.json
        return {"output": f"hello, world!, your input was: {input}"}

    return app
```
As you can see, the app has two endpoints: `/` and `/service_output`. The first
one is simply used to ping the service, while the second one is used for infernet.
We can see that our app uses the `flask` package. Additionally, we'll need to
install the `gunicorn` package to run the app. We'll create a `requirements.txt`
file with the following content:
```
Flask>=3.0.0,<4.0.0
gunicorn>=21.2.0,<22.0.0
```
## Step 2: create a Dockerfile
Next, we'll create a Dockerfile that builds the flask-app and runs it.
At the top-level directory, create a `Dockerfile` with the following content:
```dockerfile
FROM python:3.11-slim as builder
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONPATH src
WORKDIR /app
RUN apt-get update
COPY src/requirements.txt .
RUN pip install --upgrade pip && pip install -r requirements.txt
COPY src src
ENTRYPOINT ["gunicorn", "app:create_app()"]
CMD ["-b", "0.0.0.0:3000"]
```
This is a simple Dockerfile that:
1. Uses the `python:3.11-slim` image as a base image
2. Installs the requirements
3. Copies the source code
4. Runs the app on port `3000`
> [!IMPORTANT]
> The app must be exposed on port `3000`. Infernet's orchestrator
> will always assume that the container apps are exposed on that port within the container.
> Users can then remap this port to any port that they want on the host machine
> using the `port` parameter in the container specs.
By now, your project directory should look like this:
```
.
├── Dockerfile
├── README.md
└── src
    ├── __init__.py
    ├── app.py
    └── requirements.txt
```
## Step 3: build the container
Now, we can build the container. At the top-level directory, run:
```
docker build -t hello-world .
```
## Step 4: run the container
Finally, we can run the container. In one terminal, run:
```
docker run --rm -p 3000:3000 --name hello hello-world
```
## Step 5: ping the container
In another terminal, run:
```
curl localhost:3000
```
It should return something like:
```
Hello world service!
```
Congratulations! You've created a simple hello-world container that can be
used with infernet. 🎉
## Step 6: request a service output
Now, let's request a service output. Note that this endpoint is called by
the infernet node, not by the user. For debugging purposes, however, it's useful to
be able to call it manually.
In your terminal, run:
```
curl -X POST -H "Content-Type: application/json" -d '{"input": "hello"}' localhost:3000/service_output
```
The output should be something like:
```
{"output": "hello, world!, your input was: {'input': 'hello'}"}
```
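Equivalently, here's a short Python sketch (assuming the `requests` package) that exercises both endpoints:
```python
import requests

BASE_URL = "http://localhost:3000"

# Ping the index route.
print(requests.get(f"{BASE_URL}/").text)

# Call the inference endpoint the same way the Infernet node would.
r = requests.post(f"{BASE_URL}/service_output", json={"input": "hello"})
print(r.json())
```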
Your users will never call this endpoint directly. Instead, they will either:
1. [Create an off-chain job request](../../../README.md#L36) through the node API, or
2. Create a subscription on-chain from their contracts.


@ -0,0 +1,50 @@
{
  "log_path": "infernet_node.log",
  "server": {
    "port": 4000
  },
  "chain": {
    "enabled": true,
    "trail_head_blocks": 0,
    "rpc_url": "http://host.docker.internal:8545",
    "coordinator_address": "0x5FbDB2315678afecb367f032d93F642f64180aa3",
    "wallet": {
      "max_gas_limit": 4000000,
      "private_key": "0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d"
    }
  },
  "startup_wait": 1.0,
  "docker": {
    "username": "your-username",
    "password": ""
  },
  "redis": {
    "host": "redis",
    "port": 6379
  },
  "forward_stats": true,
  "containers": [
    {
      "id": "hello-world",
      "image": "ritualnetwork/hello-world-infernet:latest",
      "external": true,
      "port": "3000",
      "allowed_delegate_addresses": [],
      "allowed_addresses": [],
      "allowed_ips": [],
      "command": "--bind=0.0.0.0:3000 --workers=2",
      "env": {}
    },
    {
      "id": "anvil-node",
      "image": "ritualnetwork/infernet-anvil:0.0.0",
      "external": true,
      "port": "8545",
      "allowed_delegate_addresses": [],
      "allowed_addresses": [],
      "allowed_ips": [],
      "command": "",
      "env": {}
    }
  ]
}


@ -0,0 +1,47 @@
from time import sleep

import requests


def hit_server_directly():
    print("hello")
    r = requests.get("http://localhost:3000/")
    print(r.status_code)
    # server response
    print("server response", r.text)


def poll_until_complete(id: str):
    status = "running"
    r = None
    while status == "running":
        r = requests.get(
            "http://localhost:4000/api/jobs",
            params={
                "id": id,
            },
        ).json()[0]
        status = r.get("status")
        print("status", status)
        if status != "running":
            return r
        sleep(1)


def create_job_through_node():
    r = requests.post(
        "http://localhost:4000/api/jobs",
        json={
            "containers": ["hello-world"],
            "data": {"some": "object"},
        },
    )
    job_id = r.json().get("id")
    result = poll_until_complete(job_id)
    print("result", result)


if __name__ == "__main__":
    create_job_through_node()


@ -0,0 +1,18 @@
from typing import Any

from flask import Flask, request


def create_app() -> Flask:
    app = Flask(__name__)

    @app.route("/")
    def index() -> str:
        return "Hello world service!"

    @app.route("/service_output", methods=["POST"])
    def inference() -> dict[str, Any]:
        input = request.json
        return {"output": f"hello, world!, your input was: {input}"}

    return app


@ -0,0 +1,2 @@
Flask>=3.0.0,<4.0.0
gunicorn>=21.2.0,<22.0.0


@ -0,0 +1,34 @@
name: test

on: workflow_dispatch

env:
  FOUNDRY_PROFILE: ci

jobs:
  check:
    strategy:
      fail-fast: true

    name: Foundry project
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Install Foundry
        uses: foundry-rs/foundry-toolchain@v1
        with:
          version: nightly

      - name: Run Forge build
        run: |
          forge --version
          forge build --sizes
        id: build

      - name: Run Forge tests
        run: |
          forge test -vvv
        id: test


@ -0,0 +1,14 @@
# Compiler files
cache/
out/
# Ignores development broadcast logs
!/broadcast
/broadcast/*/31337/
/broadcast/**/dry-run/
# Docs
docs/
# Dotenv file
.env


@ -0,0 +1,14 @@
# phony targets are targets that don't actually create a file
.PHONY: deploy

# anvil's third default private key
sender := 0x5de4111afa1a4b94908f83103eb1f1706367c2e68ca870fc3fb9a804cdab365a
RPC_URL := http://localhost:8545

# deploying the contract
deploy:
	@PRIVATE_KEY=$(sender) forge script script/Deploy.s.sol:Deploy --broadcast --rpc-url $(RPC_URL)

# calling sayGM()
call-contract:
	@PRIVATE_KEY=$(sender) forge script script/CallContract.s.sol:CallContract --broadcast --rpc-url $(RPC_URL)


@ -0,0 +1,44 @@
# `Hello-World` Consumer Contracts
This is a [foundry](https://book.getfoundry.sh/) project that implements a simple Consumer
contract, [`SaysGm`](./src/SaysGM.sol).
This README explains how to compile the contract and deploy it to the Infernet Anvil testnet.
For a detailed tutorial on how to write a consumer contract, refer to the [tutorial doc](./Tutorial.md).
> [!IMPORTANT]
> Ensure that you run the following scripts against the Infernet Anvil testnet.
> The [tutorial](./../../../README.md) at the root of this repository explains how to
> bring up an infernet node.
### Installing the libraries
```bash
forge install
```
### Compiling the contracts
```bash
forge compile
```
### Deploying the contracts
The deploy script at `script/Deploy.s.sol` deploys the `SaysGM` contract to the Infernet Anvil testnet.
We have the [following make target](./Makefile#L9) to deploy the contract. Refer to the Makefile
for more details on the deploy script.
```bash
make deploy
```
### Requesting a job
We also have a script called `CallContract.s.sol` that requests a job via the `SaysGM` contract.
Refer to the [script](./script/CallContract.s.sol) for more details. Similar to deployment,
you can run that script using the following convenience make target.
```bash
make call-contract
```
Refer to the [Makefile](./Makefile#L14) for more details.


@ -0,0 +1,229 @@
# `GM! 🤠`
In this tutorial we'll make a very simple consumer contract called `SaysGm`.
All this contract does is request compute from our `hello-world` container and
upon receiving a response, it prints everything to the console.
> [!NOTE]
> Run this tutorial in a new directory; the end result will be largely the same as
> the [contracts](.) project, so refer to that if you get stuck.
## Prerequisites
### Installing foundry
You'll need [foundry](https://book.getfoundry.sh/getting-started/installation) installed.
### Scaffolding a new project
Create a new directory and run `forge init` in it. This will create a new
project with a `foundry.toml` file in it.
```bash
mkdir says-gm
cd says-gm
forge init
```
### Installing the Infernet SDK
Install our Infernet SDK via forge.
```bash
forge install ritual-net/infernet-sdk
```
### Specifying remappings
Create a new file called `remappings.txt` in the root of your project.
```
forge-std/=lib/forge-std/src
infernet-sdk/=lib/infernet-sdk/src
```
This'll make it easier to import our dependencies. More explanation on
remappings [here](https://book.getfoundry.sh/projects/dependencies?highlight=remappings#remapping-dependencies).
### `SaysGm` contract
Under the `src/` directory, create a new file called `SaysGm.sol` with the following content:
```solidity
// SPDX-License-Identifier: BSD-3-Clause-Clear
pragma solidity ^0.8.13;

import {console2} from "forge-std/console2.sol";
import {CallbackConsumer} from "infernet-sdk/consumer/Callback.sol";

contract SaysGM is CallbackConsumer {
    constructor(address coordinator) CallbackConsumer(coordinator) {}

    function sayGM() public {
        _requestCompute(
            "hello-world",
            bytes("Good morning!"),
            20 gwei,
            1_000_000,
            1
        );
    }

    function _receiveCompute(
        uint32 subscriptionId,
        uint32 interval,
        uint16 redundancy,
        address node,
        bytes calldata input,
        bytes calldata output,
        bytes calldata proof
    ) internal override {
        console2.log("\n\n"
        "_____ _____ _______ _ _ _\n"
        "| __ \\|_ _|__ __| | | | /\\ | |\n"
        "| |__) | | | | | | | | | / \\ | |\n"
        "| _ / | | | | | | | |/ /\\ \\ | |\n"
        "| | \\ \\ _| |_ | | | |__| / ____ \\| |____\n"
        "|_| \\_\\_____| |_| \\____/_/ \\_\\______|\n\n");
        console2.log("subscription Id", subscriptionId);
        console2.log("interval", interval);
        console2.log("redundancy", redundancy);
        console2.log("node", node);
        console2.log("input:");
        console2.logBytes(input);
        console2.log("output:");
        console2.logBytes(output);
        console2.log("proof:");
        console2.logBytes(proof);
    }
}
```
All this contract does is request compute from our `hello-world` container via the `_requestCompute` function.
An Infernet node will pick up this subscription, execute the compute, and deliver the result to our contract via
the `_receiveCompute` function.
### Adding a Deploy Script
In the `script` directory, add a new file called `Deploy.s.sol`:
```solidity
// SPDX-License-Identifier: BSD-3-Clause-Clear
pragma solidity ^0.8.13;

import {Script, console2} from "forge-std/Script.sol";
import {SaysGM} from "../src/SaysGM.sol";

contract Deploy is Script {
    function run() public {
        // Setup wallet
        uint256 deployerPrivateKey = vm.envUint("PRIVATE_KEY");
        vm.startBroadcast(deployerPrivateKey);

        // Log address
        address deployerAddress = vm.addr(deployerPrivateKey);
        console2.log("Loaded deployer: ", deployerAddress);

        address coordinator = 0x5FbDB2315678afecb367f032d93F642f64180aa3;

        // Create consumer
        SaysGM saysGm = new SaysGM(coordinator);
        console2.log("Deployed SaysHello: ", address(saysGm));

        // Execute
        vm.stopBroadcast();
        vm.broadcast();
    }
}
```
The coordinator address above is the address of Infernet's `Coordinator` contract, which
our Infernet Anvil node already has pre-deployed.
### Adding a Call Script
Create another file under the `script` directory called `CallContract.s.sol`
```solidity
// SPDX-License-Identifier: BSD-3-Clause-Clear
pragma solidity ^0.8.0;

import {Script, console2} from "forge-std/Script.sol";
import {SaysGM} from "../src/SaysGM.sol";

contract CallContract is Script {
    function run() public {
        // Setup wallet
        uint256 deployerPrivateKey = vm.envUint("PRIVATE_KEY");
        vm.startBroadcast(deployerPrivateKey);

        SaysGM saysGm = SaysGM(0x663F3ad617193148711d28f5334eE4Ed07016602);
        saysGm.sayGM();

        vm.stopBroadcast();
    }
}
```
### Building the Project
Before building our project, we'll need to add this line to the `foundry.toml` file:
```
via_ir = true
```
So your `foundry.toml` file should look like [this](./foundry.toml). Otherwise the compiler will complain
about stack too deep errors.
Now, let's build our project.
```bash
forge build
```
The project should build successfully.
### Deploying the Contracts
**Deploy Infernet**
To deploy our contracts, and later be able to call and test them, we'll need to deploy infernet, as well as
our `hello-world` container! Refer to [the readme at the root of this project](../../README.md) for instructions on how
to do that.
After deploying an Infernet Node locally, we'll need to run the `Deploy` script.
```bash
PRIVATE_KEY=0x5de4111afa1a4b94908f83103eb1f1706367c2e68ca870fc3fb9a804cdab365a \
forge script script/Deploy.s.sol:Deploy --broadcast \
--rpc-url http://localhost:8545
```
The private key here belongs to Anvil's third default account, which is pre-funded with 10000 ETH.
### Calling the Contract
Similarly, to run our `CallContract.s.sol` script, we'll invoke it with `forge script`:
```bash
PRIVATE_KEY=0x5de4111afa1a4b94908f83103eb1f1706367c2e68ca870fc3fb9a804cdab365a \
forge script script/CallContract.s.sol:CallContract --broadcast \
--rpc-url http://localhost:8545
```
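As an aside, you don't have to go through `forge script` to trigger the contract: since Anvil's default accounts are unlocked, a short `web3.py` sketch can call `sayGM()` directly (this assumes the contract is deployed at the address above and the stack from the main README is running):
```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

# Minimal ABI: we only need the sayGM() entry point for this call.
SAYS_GM_ABI = [
    {"inputs": [], "name": "sayGM", "outputs": [], "stateMutability": "nonpayable", "type": "function"}
]

says_gm = w3.eth.contract(
    address=Web3.to_checksum_address("0x663F3ad617193148711d28f5334eE4Ed07016602"),
    abi=SAYS_GM_ABI,
)

# Anvil's default accounts are unlocked, so we can transact from the third one directly.
tx_hash = says_gm.functions.sayGM().transact({"from": w3.eth.accounts[2]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("sayGM() mined in block", receipt.blockNumber)
```
Either way, the Infernet node will pick up the resulting subscription and deliver the output on-chain.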
### Using a `Makefile`
To make running these commands easier, we can add them to a `Makefile`. This allows
us to run `make deploy` and `make call-contract` instead of typing out the full command every time.
Refer to [this project's Makefile](./Makefile) for an example.
### 🎉 Done!
Congratulations! You've successfully created a contract that requests compute from
our `hello-world` container.


@ -0,0 +1,7 @@
[profile.default]
src = "src"
out = "out"
libs = ["lib"]
via_ir = true
# See more config options https://github.com/foundry-rs/foundry/blob/master/crates/config/README.md#all-options


@ -0,0 +1,2 @@
forge-std/=lib/forge-std/src
infernet-sdk/=lib/infernet-sdk/src


@ -0,0 +1,19 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
pragma solidity ^0.8.0;

import {Script, console2} from "forge-std/Script.sol";
import {SaysGM} from "../src/SaysGM.sol";

contract CallContract is Script {
    function run() public {
        // Setup wallet
        uint256 deployerPrivateKey = vm.envUint("PRIVATE_KEY");
        vm.startBroadcast(deployerPrivateKey);

        SaysGM saysGm = SaysGM(0x663F3ad617193148711d28f5334eE4Ed07016602);
        saysGm.sayGM();

        vm.stopBroadcast();
    }
}


@ -0,0 +1,26 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
pragma solidity ^0.8.13;

import {Script, console2} from "forge-std/Script.sol";
import {SaysGM} from "../src/SaysGM.sol";

contract Deploy is Script {
    function run() public {
        // Setup wallet
        uint256 deployerPrivateKey = vm.envUint("PRIVATE_KEY");
        vm.startBroadcast(deployerPrivateKey);

        // Log address
        address deployerAddress = vm.addr(deployerPrivateKey);
        console2.log("Loaded deployer: ", deployerAddress);

        address coordinator = 0x5FbDB2315678afecb367f032d93F642f64180aa3;

        // Create consumer
        SaysGM saysGm = new SaysGM(coordinator);
        console2.log("Deployed SaysHello: ", address(saysGm));

        // Execute
        vm.stopBroadcast();
        vm.broadcast();
    }
}


@ -0,0 +1,49 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
pragma solidity ^0.8.13;

import {console2} from "forge-std/console2.sol";
import {CallbackConsumer} from "infernet-sdk/consumer/Callback.sol";

contract SaysGM is CallbackConsumer {
    constructor(address coordinator) CallbackConsumer(coordinator) {}

    function sayGM() public {
        _requestCompute(
            "hello-world",
            bytes("Good morning!"),
            20 gwei,
            1_000_000,
            1
        );
    }

    function _receiveCompute(
        uint32 subscriptionId,
        uint32 interval,
        uint16 redundancy,
        address node,
        bytes calldata input,
        bytes calldata output,
        bytes calldata proof
    ) internal override {
        console2.log("\n\n"
        "_____ _____ _______ _ _ _\n"
        "| __ \\|_ _|__ __| | | | /\\ | |\n"
        "| |__) | | | | | | | | | / \\ | |\n"
        "| _ / | | | | | | | |/ /\\ \\ | |\n"
        "| | \\ \\ _| |_ | | | |__| / ____ \\| |____\n"
        "|_| \\_\\_____| |_| \\____/_/ \\_\\______|\n\n");
        console2.log("subscription Id", subscriptionId);
        console2.log("interval", interval);
        console2.log("redundancy", redundancy);
        console2.log("node", node);
        console2.log("input:");
        console2.logBytes(input);
        console2.log("output:");
        console2.logBytes(output);
        console2.log("proof:");
        console2.logBytes(proof);
    }
}