
GPT 4

In this example, we run a minimalist container that makes use of our closed-source model workflow, CSSInferenceWorkflow. Refer to src/app.py for the implementation of the Quart application.
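The authoritative implementation lives in src/app.py. As a hedged illustration of the request shape the service accepts (field names are taken from the test request later in this README; the helper name and validation logic are hypothetical, not the container's actual code), parsing the payload might look like:

```python
from typing import Any


def parse_service_input(body: dict[str, Any]) -> tuple[int, str]:
    """Validate a /service_output request body and return (source, prompt).

    The body is expected to look like:
        {"source": 1, "data": {"prompt": "..."}}
    where the meaning of "source" is defined by the Infernet node protocol.
    This is an illustrative sketch; see src/app.py for the real handling.
    """
    source = body.get("source")
    data = body.get("data")
    if not isinstance(source, int) or not isinstance(data, dict):
        raise ValueError("malformed request body")
    prompt = data.get("prompt")
    if not isinstance(prompt, str) or not prompt:
        raise ValueError("missing or empty prompt")
    return source, prompt
```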

Requirements

To use the model, you'll need an OpenAI API key. Get one at OpenAI's website.
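The repository ships a gpt4.env.sample template for this. Presumably you copy it (e.g. to gpt4.env) and fill in your key; the variable name below is an assumption, so check the sample file for the actual name it expects:

```
OPENAI_API_KEY=<your-openai-api-key>
```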

Run the Container

make run

Test the Container

curl -X POST localhost:3000/service_output -H "Content-Type: application/json" \
  -d '{"source": 1, "data": {"prompt": "can shrimps actually fry rice?"}}'
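Equivalently, the same request can be made from Python with only the standard library. The URL and payload below simply mirror the curl command above; the function name is illustrative:

```python
import json
from urllib import request


def request_inference(
    prompt: str,
    url: str = "http://localhost:3000/service_output",
) -> dict:
    """POST a prompt to the running container and return the decoded JSON reply."""
    body = json.dumps({"source": 1, "data": {"prompt": prompt}}).encode("utf-8")
    req = request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, `request_inference("can shrimps actually fry rice?")` sends the same request as the curl command and returns the service's JSON response as a dict.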