Response generators#

class src.openCHA.response_generators.response_generator.BaseResponseGenerator(*, llm_model: BaseLLM = None, prefix: str = '', summarize_prompt: bool = True, max_tokens_allowed: int = 10000)[source]#

Description:

Base class for a response generator, providing a foundation for generating responses using a language model.
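
A minimal construction sketch based on the signature above; my_llm is a placeholder for a concrete BaseLLM instance (not part of this reference), and the top-level import path is assumed to re-export the class, as the Example at the end of this page does for the other names.

from openCHA.response_generators import BaseResponseGenerator

# my_llm stands in for a concrete BaseLLM instance (placeholder)
response_generator = BaseResponseGenerator(
    llm_model=my_llm,
    max_tokens_allowed=10000,  # documented default
)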

divide_text_into_chunks(input_text: str = '', max_tokens: int = 10000) List[str][source]#

Divide the input text into chunks, each containing at most max_tokens tokens.

Parameters:
  • input_text (str) – The input text (e.g., the prompt).

  • max_tokens (int) – Maximum number of tokens allowed.

Returns:

List of text chunks, each within the token limit.

Return type:

List[str]
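
For illustration, a minimal usage sketch of the chunking helper; response_generator is the object built in the Example at the end of this page, and long_prompt is a made-up input.

long_prompt = "summarize my sleep data " * 2000  # hypothetical oversized prompt
chunks = response_generator.divide_text_into_chunks(input_text=long_prompt, max_tokens=10000)
print(len(chunks))  # number of chunks produced from the input text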

generate(prefix: str = '', query: str = '', thinker: str = '', **kwargs: Any) str[source]#

Generate a response based on the input prefix, query, and thinker (task planner).

Parameters:
  • prefix (str) – Prefix to be added to the response.

  • query (str) – User’s input query.

  • thinker (str) – Thinker’s (Task Planner) generated answer.

  • **kwargs (Any) – Additional keyword arguments.

Returns:

Generated response.

Return type:

str

Example

from openCHA.llms import LLMType
# initialize_response_generator is assumed to be the response-generator counterpart of initialize_planner
from openCHA.response_generators import ResponseGeneratorType, initialize_response_generator

response_generator = initialize_response_generator(llm=LLMType.OPENAI, response_generator=ResponseGeneratorType.BASE_GENERATOR)
response_generator.generate(query="How can I improve my sleep?", thinker="Based on data found on the internet there are several ...")