Investigate making prompts more efficient (token-usage) #11

@inFocus7

Description

I haven't looked into token-usage efficiency in my current implementation of listicle content generation. Since using more tokens costs more money, we should look into how to reduce usage.
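
As a first step, here's a minimal sketch of how we could measure where the tokens actually go, assuming we use the `tiktoken` library; the prompt strings below are placeholders, not the real prompts in the codebase:

```python
# Rough token accounting for our ChatGPT prompts, using tiktoken.
import tiktoken

# Encoding used by the gpt-3.5-turbo / gpt-4 model family.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

prompts = {
    "listicle_creation": "Write a listicle about ...",              # placeholder
    "jsonify_listicle": "Convert the listicle above into JSON ...",  # placeholder
}

for name, text in prompts.items():
    # len(enc.encode(text)) is the number of tokens the prompt consumes.
    print(f"{name}: {len(enc.encode(text))} tokens")
```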

As of now there are three somewhat long prompts passed to OpenAI (a rough sketch of the whole flow follows the list):

  1. Listicle Creation (ChatGPT)
    • This is where we prompt with the fields from the web UI to generate the listicle content.
  2. JSON-ify Listicle (ChatGPT)
    • Since the generated listicle can vary in formatting, we send a follow-up prompt that passes along the above context plus a new instruction stating we want JSON, so we get a response we can programmatically parse in our image-processing step.
  3. Image Generation (DALL-E)
    • At the end, we loop through the JSON and prompt DALL-E to generate a portrait image based on each entry's description.
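
For context, here is a minimal sketch of that three-call flow using the `openai` Python client. The prompt text, model choices, and JSON shape are illustrative assumptions, not the actual implementation:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Listicle Creation: prompt built from the web UI fields (placeholder text).
creation_prompt = "Write a top-5 listicle about fantasy characters ..."
messages = [{"role": "user", "content": creation_prompt}]
listicle = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages
).choices[0].message.content

# 2. JSON-ify Listicle: re-send the context plus a new instruction asking for
#    JSON. This is where token usage doubles up, since the whole listicle is
#    sent back to the model.
messages += [
    {"role": "assistant", "content": listicle},
    {"role": "user", "content": "Return the listicle above as a JSON array of "
                                '{"name": ..., "description": ...} objects.'},
]
data = json.loads(
    client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    ).choices[0].message.content
)

# 3. Image Generation: one DALL-E portrait per entry's description.
for entry in data:
    image = client.images.generate(
        model="dall-e-2", prompt=f"Portrait of {entry['description']}", n=1
    )
    print(entry["name"], image.data[0].url)
```

One obvious avenue to investigate: the second call resends the entire first exchange, so asking for JSON directly in the first prompt (or using the chat completions `response_format={"type": "json_object"}` JSON mode, on models that support it) might let us drop call 2 entirely.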
