I haven't looked into token-usage efficiency in my current implementation of listicle content generation. Since using more tokens costs more money, we should look into reducing usage.
As of now, three somewhat long prompts are passed to OpenAI:
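As a first step, it would help to measure how many tokens each prompt actually uses before and after trimming. OpenAI's tiktoken library gives exact counts per model; the sketch below instead uses the rough ~4-characters-per-token heuristic for English text so it has no dependencies (the heuristic and the sample prompts are assumptions, not our actual prompts):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: OpenAI models average about 4 characters
    per token for English text. Heuristic only, not exact."""
    return max(1, len(text) // 4)

# Hypothetical long vs. trimmed versions of the same prompt.
long_prompt = (
    "Please write for me a listicle covering the top 10 budget travel "
    "destinations, and for each destination include a short description."
)
short_prompt = "Top 10 budget travel destinations, listicle format, with descriptions."

print(estimate_tokens(long_prompt), "vs", estimate_tokens(short_prompt))
```

Comparing these counts per prompt would tell us which of the three prompts is worth optimizing first.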
- Listicle Creation (ChatGPT)
  - This is where we use the fields in the web UI to prompt for the listicle information.
- JSON-ify Listicle (ChatGPT)
  - Since the listicle's formatting can vary, this prompt passes along the context above plus a new instruction stating we want JSON, so we get a response we can programmatically parse in our image-processing step.
- Image Generation (DALL-E)
  - At the end, we loop through the JSON and prompt DALL-E to generate a portrait image from each item's description.
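To make the cost discussion concrete, the three-step flow above can be sketched as prompt builders, with the actual API calls stubbed out (all function names, prompt wording, and the `title`/`description` JSON keys are illustrative assumptions, not the real implementation):

```python
def build_listicle_messages(topic: str, item_count: int) -> list[dict]:
    """Step 1: prompt assembled from the web-UI fields."""
    return [{
        "role": "user",
        "content": f"Write a listicle of the top {item_count} {topic}, "
                   f"with a short description for each entry.",
    }]

def build_jsonify_messages(listicle_text: str) -> list[dict]:
    """Step 2: re-send the generated listicle plus an instruction asking
    for JSON, so the image-processing step can parse it."""
    return [{
        "role": "user",
        "content": "Convert the following listicle into a JSON array of "
                   "objects with 'title' and 'description' keys:\n"
                   + listicle_text,
    }]

def build_image_prompt(item: dict) -> str:
    """Step 3: one DALL-E portrait prompt per parsed JSON item."""
    return f"Portrait image of: {item['description']}"
```

Note that step 2 re-sends the entire listicle from step 1, so its token cost scales with the listicle's length. One option worth investigating is asking for JSON output directly in the step-1 prompt, which would eliminate the second call entirely.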