
GPT-4

In this example, we run a minimalist container that uses our closed-source model workflow, CSSInferenceWorkflow. Refer to src/app.py for the implementation of the Quart application.

Requirements

To use the model, you'll need an OpenAI API key. Get one at OpenAI's website.
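The key is supplied to the container through an env file. A minimal sketch, assuming the variable is named OPENAI_API_KEY (check gpt4.env.sample for the exact key the container expects):

```shell
# Copy the sample env file and fill in your key.
# OPENAI_API_KEY is an assumed name; confirm it against gpt4.env.sample.
cp gpt4.env.sample gpt4.env
echo "OPENAI_API_KEY=sk-..." >> gpt4.env
```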

Run the Container

make run

Test the Container

curl -X POST localhost:3000/service_output -H "Content-Type: application/json" \
  -d '{"source": 1, "data": {"text": "can shrimps actually fry rice?"}}'
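The same request can be issued from Python using only the standard library. This is a sketch that mirrors the curl command above; the endpoint, headers, and payload are taken directly from it:

```python
import json
from urllib import request

# Payload matching the curl example above.
payload = {"source": 1, "data": {"text": "can shrimps actually fry rice?"}}

req = request.Request(
    "http://localhost:3000/service_output",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the container is running:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```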