
Add summarisation #2

@antoni-devlin

Description


Using LangChain, I should be able to prompt Ollama's openhermes model (or another, faster model?) to summarise the transcript created by Whisper.

I think these are the relevant LangChain docs.

As long as the Ollama server is running, basic prompting can be achieved by:

from langchain_community.llms import Ollama

# Connects to the locally running Ollama server and sends a single prompt
llm = Ollama(model="llama2")
llm.invoke("<Prompt here>")

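For the summarisation step itself, something along these lines could work as a starting point. This is only a sketch: the openhermes model name, the transcript.txt path, and the prompt wording are all assumptions rather than decisions.

from langchain_community.llms import Ollama

# Sketch only: "transcript.txt" stands in for wherever the Whisper output
# ends up, and "openhermes" is just one candidate model.
with open("transcript.txt") as f:
    transcript = f.read()

llm = Ollama(model="openhermes")

prompt = (
    "Summarise the following transcript in a few short paragraphs:\n\n"
    f"{transcript}"
)

summary = llm.invoke(prompt)
print(summary)

Very long transcripts may not fit in the model's context window, in which case the transcript would need to be split into chunks and summarised in stages, but the simple single-prompt approach above should be enough to start with.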