PromptingTools

Documentation for PromptingTools.

PromptingTools.ChatMLSchema (Type)

ChatMLSchema is used by many open-source chatbots, by OpenAI models under the hood, and by several models and interfaces (eg, Ollama, vLLM).

It uses the following conversation structure:

<|im_start|>system
...<|im_end|>
<|im_start|>user
...<|im_end|>
<|im_start|>assistant
...<|im_end|>
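
To make the layout concrete, here is a small, hypothetical helper that formats role => content pairs into the ChatML layout above (PromptingTools' render does this translation internally; to_chatml is not part of the package):

function to_chatml(messages)
    io = IOBuffer()
    for (role, content) in messages
        # each turn opens with <|im_start|>role and closes with <|im_end|>
        print(io, "<|im_start|>", role, '\n', content, "<|im_end|>\n")
    end
    return String(take!(io))
end

print(to_chatml(["system" => "You are a helpful assistant.", "user" => "Say Hi!"]))
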
PromptingTools.OpenAISchema (Type)

OpenAISchema is the default schema for OpenAI models.

It uses the following conversation template:

[Dict(role="system",content="..."),Dict(role="user",content="..."),Dict(role="assistant",content="...")]

It's recommended to separate sections in your prompt with markdown headers (e.g., `## Answer\n\n`).
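
For illustration, a header-separated prompt could look like this (a minimal sketch; the header names are up to you):

msg = aigenerate("## Question\n\nWhat is the capital of France?\n\n## Answer\n\n")
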

PromptingTools.aiclassify (Method)
aiclassify(prompt_schema::AbstractOpenAISchema, prompt;
api_kwargs::NamedTuple = (logit_bias = Dict(837 => 100, 905 => 100, 9987 => 100),
    max_tokens = 1, temperature = 0),
kwargs...)

Classifies the given prompt/statement as true/false/unknown.

Note: this is a very simple classifier; it is not meant to be used in production. Credit goes to https://twitter.com/AAAzzam/status/1669753721574633473.

It uses the logit bias trick to force the model to output only true/false/unknown.

Output tokens used (via api_kwargs):

  • 837: ' true'
  • 905: ' false'
  • 9987: ' unknown'
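
Note that these token IDs are tokenizer-specific; if your model uses a different tokenizer, you can override them via api_kwargs. A sketch restating the defaults shown above:

aiclassify("Is water wet?";
    api_kwargs = (logit_bias = Dict(837 => 100, 905 => 100, 9987 => 100),
        max_tokens = 1, temperature = 0))
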

Arguments

  • prompt_schema::AbstractOpenAISchema: The schema for the prompt.
  • prompt: The prompt/statement to classify if it's a String. If it's a Symbol, it is expanded as a template via render(schema,template).

Example

aiclassify("Is two plus two four?") # true
aiclassify("Is two plus three a vegetable on Mars?") # false

aiclassify returns only true/false/unknown. It's easy to get the proper Bool output type with tryparse, eg,

tryparse(Bool, aiclassify("Is two plus two four?")) isa Bool # true

An output of type Nothing indicates that the model couldn't classify the statement as true/false.
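
A small sketch of handling that case explicitly (the example statement is chosen so the model is likely to answer "unknown"):

parsed = tryparse(Bool, aiclassify("Is the number of stars in the universe even?"))
if isnothing(parsed)
    @info "Model answered 'unknown'; no Bool value is available."
end
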

Ideally, we would like to reuse a helpful system prompt to get more accurate responses. For this reason, we have templates, eg, :IsStatementTrue. By specifying the template, we can provide our statement as the expected variable (statement in this case). See that the model now correctly classifies the statement as "unknown".

aiclassify(:IsStatementTrue; statement = "Is two plus three a vegetable on Mars?") # unknown

For better results, use a higher-quality model like GPT-4, eg,

aiclassify(:IsStatementTrue;
    statement = "If I had two apples and I got three more, I have five apples now.",
    model = "gpt4") # true
PromptingTools.aiembed (Method)
aiembed(prompt_schema::AbstractOpenAISchema,
        doc_or_docs::Union{AbstractString, Vector{<:AbstractString}},
        postprocess::F = identity;
        verbose::Bool = true,
        api_key::String = API_KEY,
        model::String = MODEL_EMBEDDING,
        http_kwargs::NamedTuple = (retry_non_idempotent = true,
                                   retries = 5,
                                   readtimeout = 120),
        api_kwargs::NamedTuple = NamedTuple(),
        kwargs...) where {F <: Function}

The aiembed function generates embeddings for the given input using a specified model and returns a message object containing the embeddings, status, token count, and elapsed time.

Arguments

  • prompt_schema::AbstractOpenAISchema: The schema for the prompt.
  • doc_or_docs::Union{AbstractString, Vector{<:AbstractString}}: The document or list of documents to generate embeddings for.
  • postprocess::F: The post-processing function to apply to each embedding. Defaults to the identity function.
  • verbose::Bool: A flag indicating whether to print verbose information. Defaults to true.
  • api_key::String: The API key to use for the OpenAI API. Defaults to API_KEY.
  • model::String: The model to use for generating embeddings. Defaults to MODEL_EMBEDDING.
  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to (retry_non_idempotent = true, retries = 5, readtimeout = 120).
  • api_kwargs::NamedTuple: Additional keyword arguments for the OpenAI API. Defaults to an empty NamedTuple.
  • kwargs...: Additional keyword arguments.

Returns

  • msg: A DataMessage object containing the embeddings, status, token count, and elapsed time.

Example

msg = aiembed("Hello World")
msg.content # 1536-element JSON3.Array{Float64...

We can embed multiple strings at once; they will be horizontally concatenated (hcat) into a matrix (ie, each column corresponds to one string):

msg = aiembed(["Hello World", "How are you?"])
msg.content # 1536×2 Matrix{Float64}:

If you plan to calculate cosine similarity between embeddings, you can normalize them first:

using LinearAlgebra
msg = aiembed(["embed me", "and me too"], LinearAlgebra.normalize)

# calculate cosine similarity between the two normalized embeddings as a simple dot product
msg.content' * msg.content[:, 1] # [1.0, 0.787]
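
The same dot-product trick gives you all pairwise similarities at once (assuming normalized embeddings as above):

sims = msg.content' * msg.content # n×n matrix; sims[i, j] compares documents i and j
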
PromptingTools.aigenerate (Method)
aigenerate([prompt_schema::AbstractOpenAISchema,] prompt; verbose::Bool = true,
    api_key::String = API_KEY,
    model::String = MODEL_CHAT,
    http_kwargs::NamedTuple = (;
        retry_non_idempotent = true,
        retries = 5,
        readtimeout = 120), api_kwargs::NamedTuple = NamedTuple(),
    kwargs...)

Generate an AI response based on a given prompt using the OpenAI API.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (defaults to PROMPT_SCHEMA = OpenAISchema)
  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate
  • verbose: A boolean indicating whether to print additional information.
  • api_key: A string representing the API key for accessing the OpenAI API.
  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.
  • http_kwargs: A named tuple of HTTP keyword arguments.
  • api_kwargs: A named tuple of API keyword arguments.
  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

See also: @ai_str

Example

Simple hello world to test the API:

result = aigenerate("Say Hi!")
# [ Info: Tokens: 29 @ Cost: $0.0 in 1.0 seconds
# AIMessage("Hello! How can I assist you today?")

result is an AIMessage object. Access the generated string via the content property:

typeof(result) # AIMessage{SubString{String}}
propertynames(result) # (:content, :status, :tokens, :elapsed)
result.content # "Hello! How can I assist you today?"

You can use string interpolation:

a = 1
msg=aigenerate("What is `$a+$a`?")
msg.content # "The sum of `1+1` is `2`."

You can provide the whole conversation or more intricate prompts as a Vector{AbstractMessage}:

conversation = [
    SystemMessage("You're master Yoda from Star Wars trying to help the user become a Yedi."),
    UserMessage("I have feelings for my iPhone. What should I do?")]
msg=aigenerate(conversation)
# AIMessage("Ah, strong feelings you have for your iPhone. A Jedi's path, this is not... <continues>")
PromptingTools.render (Method)

Builds a history of the conversation to provide the prompt to the API. All kwargs are passed as replacements such that {{key}} => value in the template.
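
A minimal sketch of the replacement behavior, assuming the default OpenAISchema:

msgs = render(OpenAISchema(), [UserMessage("Say hi to {{name}}!")]; name = "Jan")
# the rendered user message now reads "Say hi to Jan!"
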

PromptingTools.@aai_str (Macro)
aai"user_prompt"[model_alias] -> AIMessage

Asynchronous version of the @ai_str macro, which will log the result once it's ready.

Example

Send an asynchronous request to GPT-4, so we don't have to wait for the response. This is very practical with slow models, as you can keep working in the meantime.

m = aai"Say Hi!"gpt4;

# ...with some delay...
# [ Info: Tokens: 29 @ Cost: $0.0011 in 2.7 seconds
# [ Info: AIMessage> Hello! How can I assist you today?

PromptingTools.@ai_str (Macro)
ai"user_prompt"[model_alias] -> AIMessage

The ai"" string macro generates an AI response to a given prompt by using aigenerate under the hood.

Arguments

  • user_prompt (String): The input prompt for the AI model.
  • model_alias (optional, any): Provide the model alias of the AI model to use (see MODEL_ALIASES).

Returns

AIMessage corresponding to the input prompt.

Example

result = ai"Hello, how are you?"
# AIMessage("Hello! I'm an AI assistant, so I don't have feelings, but I'm here to help you. How can I assist you today?")

If you want to interpolate some variables or additional context, simply use string interpolation:

a=1
result = ai"What is `$a+$a`?"
# AIMessage("The sum of `1+1` is `2`.")

If you want to use a different model, eg, GPT-4, you can provide its alias as a flag:

result = ai"What is `1.23 * 100 + 1`?"gpt4
# AIMessage("The answer is 124.")