Orchestrator#
- class src.openCHA.orchestrator.orchestrator.Orchestrator(*, planner: BasePlanner = None, datapipe: DataPipe = None, promptist: Any = None, response_generator: BaseResponseGenerator = None, available_tasks: Dict[str, BaseTask] = {}, max_retries: int = 5, max_task_execute_retries: int = 3, max_planner_execute_retries: int = 16, max_final_answer_execute_retries: int = 3, role: int = 0, verbose: bool = False, planner_logger: Logger | None = None, tasks_logger: Logger | None = None, orchestrator_logger: Logger | None = None, final_answer_generator_logger: Logger | None = None, promptist_logger: Logger | None = None, error_logger: Logger | None = None, previous_actions: List[str] = [], current_actions: List[str] = [], runtime: Dict[str, bool] = {})[source]#
Description:
The Orchestrator class is the main execution heart of the CHA. All the components of the Orchestrator are initialized and executed here. The Orchestrator starts a new answering cycle by calling the run method. From there, planning begins and tasks are executed one by one until the Task Planner decides that no more information is needed. Finally, the Task Planner's final answer is routed to the Final Response Generator, which generates an empathic final response that is returned to the user.
- execute_task(task_name: str, task_inputs: List[str]) Any [source]#
Execute the specified task based on the planner’s selected Action. This method executes a specific task based on the provided action. It takes an action as input and retrieves the corresponding task from the available tasks dictionary. It then executes the task with the given task input. If the task has an output_type, it stores the result in the datapipe and returns a message indicating the storage key. Otherwise, it returns the result directly.
- Parameters:
task_name (str) – The name of the Task.
task_inputs (List[str]) – The list of inputs for the task.
- Returns:
Result of the task execution, or a message containing the datapipe storage key if the task defines an output_type. bool: Whether the task result should be returned directly to the user, ending planning.
- Return type:
str
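The datapipe-storage behavior described above can be sketched in plain Python. This is an illustrative mock, not the openCHA implementation: `MockDataPipe`, the `tasks` dictionary, and the key-message format are hypothetical stand-ins for `DataPipe`, the available-tasks registry, and the real message text.

```python
import uuid

# Hypothetical stand-in for openCHA's DataPipe: stores results under a key.
class MockDataPipe:
    def __init__(self):
        self._store = {}

    def store(self, data):
        key = str(uuid.uuid4())
        self._store[key] = data
        return key

    def retrieve(self, key):
        return self._store[key]

# Illustrative tasks: one returns its result directly, one has an output_type.
tasks = {
    "echo": {"execute": lambda inputs: " ".join(inputs), "output_type": False},
    "big_report": {"execute": lambda inputs: "x" * 10_000, "output_type": True},
}

def execute_task(task_name, task_inputs, datapipe):
    task = tasks[task_name]
    result = task["execute"](task_inputs)
    if task["output_type"]:
        # Large results go to the datapipe; only the key is surfaced to the planner.
        key = datapipe.store(result)
        return f"The result is stored in the datapipe with key: {key}"
    return result

datapipe = MockDataPipe()
print(execute_task("echo", ["hello", "world"], datapipe))  # direct result
print(execute_task("big_report", [], datapipe))            # storage-key message
```

Storing bulky task output out-of-band and passing only a key keeps the planner's context small while later tasks can still retrieve the full data.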
- generate_final_answer(query, thinker, **kwargs) str [source]#
Generate the final answer using the response generator. This method generates the final answer based on the provided query and thinker. It calls the generate method of the response generator and returns the generated answer.
- Parameters:
query (str) – Input query.
thinker (str) – Thinking component.
- Returns:
Final generated answer.
- Return type:
str
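The delegation described above can be sketched with a mock response generator. This is illustrative only: `mock_generator` is a hypothetical stand-in for an LLM-backed BaseResponseGenerator, and the output format is invented.

```python
def generate_final_answer(query, thinker, response_generator):
    # Delegate to the response generator, which turns the planner's
    # reasoning (thinker) into a user-facing answer.
    return response_generator(query=query, thinker=thinker)

# Mock generator standing in for an LLM-backed response generator.
def mock_generator(query, thinker):
    return f"Based on your question {query!r}: {thinker}"

print(generate_final_answer("How do I sleep better?", "Keep a schedule.", mock_generator))
```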
- classmethod initialize(planner_llm: str = LLMType.OPENAI, planner_name: str = PlannerType.ZERO_SHOT_REACT_PLANNER, datapipe_name: str = DatapipeType.MEMORY, promptist_name: str = '', response_generator_llm: str = LLMType.OPENAI, response_generator_name: str = ResponseGeneratorType.BASE_GENERATOR, available_tasks: List[str] | None = None, previous_actions: List[Action] = None, verbose: bool = False, **kwargs) Orchestrator [source]#
This class method initializes the Orchestrator by setting up the planner, datapipe, promptist, response generator, and available tasks.
- Parameters:
planner_llm (str) – LLMType to be used as LLM for planner.
planner_name (str) – PlannerType to be used as task planner.
datapipe_name (str) – DatapipeType to be used as data pipe.
promptist_name (str) – Not implemented yet!
response_generator_llm (str) – LLMType to be used as LLM for response generator.
response_generator_name (str) – ResponseGeneratorType to be used as response generator.
available_tasks (List[str]) – List of available tasks using TaskType.
previous_actions (List[Action]) – List of previous actions.
verbose (bool) – Specifies whether debugging logs should be printed.
**kwargs (Any) – Additional keyword arguments.
- Returns:
Initialized Orchestrator instance.
- Return type:
Orchestrator
Example:
    from openCHA.datapipes import DatapipeType
    from openCHA.planners import PlannerType
    from openCHA.response_generators import ResponseGeneratorType
    from openCHA.tasks import TaskType
    from openCHA.llms import LLMType
    from openCHA.orchestrator import Orchestrator

    orchestrator = Orchestrator.initialize(
        planner_llm=LLMType.OPENAI,
        planner_name=PlannerType.ZERO_SHOT_REACT_PLANNER,
        datapipe_name=DatapipeType.MEMORY,
        promptist_name="",
        response_generator_llm=LLMType.OPENAI,
        response_generator_name=ResponseGeneratorType.BASE_GENERATOR,
        available_tasks=[TaskType.SERPAPI, TaskType.EXTRACT_TEXT],
        verbose=False,
        **kwargs,
    )
- plan(query, history, meta, use_history, **kwargs) str [source]#
Plan actions based on the query, history, and previous actions using the selected planner type. This method generates a plan of actions based on the provided query, history, previous actions, and the use_history flag. It calls the plan method of the planner and returns a list of actions or a PlanFinish.
- Parameters:
query (str) – Input query.
history (str) – History information.
meta (Any) – Meta information.
use_history (bool) – Flag indicating whether to use history.
- Returns:
A Python code block to be executed by the Task Executor.
- Return type:
str
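The contract described above — the planner emitting a Python code block that the Task Executor runs — can be sketched as follows. This is an illustrative mock: the `planned_code` string, the fake task implementations, and `run_plan` are hypothetical; the real planner output format is defined by the chosen PlannerType.

```python
# Hypothetical planner output: a code block calling the available tasks.
planned_code = """
results = []
results.append(execute_task("search", ["What is CHA?"]))
results.append(execute_task("summarize", [results[0]]))
"""

# Illustrative task implementations the executor exposes to the plan.
def execute_task(task_name, task_inputs):
    fake_tasks = {
        "search": lambda inputs: f"search results for {inputs[0]!r}",
        "summarize": lambda inputs: f"summary of {inputs[0]!r}",
    }
    return fake_tasks[task_name](task_inputs)

def run_plan(code):
    # Execute the planner's code block in a namespace that exposes the tasks.
    namespace = {"execute_task": execute_task}
    exec(code, namespace)
    return namespace["results"]

for line in run_plan(planned_code):
    print(line)
```

Emitting a code block rather than one action at a time lets the planner chain task outputs (here, the summarize step consumes the search result) in a single planning round.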
- planner_generate_prompt(query) str [source]#
Generate a prompt from the query to make it more understandable for both the planner and the response generator. Not implemented yet.
- Parameters:
query (str) – Input query.
- Returns:
Generated prompt.
- Return type:
str
- process_meta() bool [source]#
This method processes the meta information and returns a boolean value. Currently, it always returns False.
- Returns:
False
- Return type:
bool
- run(query: str, meta: List[str] = None, history: str = '', use_history: bool = False, **kwargs: Any) str [source]#
This method runs the orchestrator by taking a query, meta information, history, and other optional keyword arguments as input. It initializes variables for tracking the execution, generates a prompt based on the query, and sets up a loop for executing actions. Within the loop, it plans actions, executes tasks, and updates the previous actions list. If a PlanFinish action is encountered, the loop breaks, and the final response is set. If any errors occur during execution, the loop retries a limited number of times before setting a final error response. Finally, it generates the final response using the prompt and thinker, and returns the final response along with the previous actions.
- Parameters:
query (str) – Input query.
meta (List[str]) – Meta information.
history (str) – History information.
use_history (bool) – Flag indicating whether to use history.
**kwargs (Any) – Additional keyword arguments.
- Returns:
The final response to be shown to the user.
- Return type:
str
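The control flow described above can be condensed into a short sketch. This is illustrative only: `PlanFinish`, the `planner`/`executor`/`responder` callables, the retry handling, and the error message are simplified stand-ins for the openCHA components, not the library's actual signatures.

```python
class PlanFinish:
    """Stand-in for the planner action that ends the answering cycle."""
    def __init__(self, response):
        self.response = response

def run(query, planner, executor, responder, max_retries=5):
    previous_actions = []
    final_response = None
    for _ in range(max_retries):
        try:
            actions = planner(query, previous_actions)
            for action in actions:
                if isinstance(action, PlanFinish):
                    # Planner decided no more information is needed.
                    final_response = action.response
                    break
                previous_actions.append(executor(action))
            if final_response is not None:
                break
        except Exception:
            continue  # retry planning on error, up to max_retries
    if final_response is None:
        final_response = "Sorry, something went wrong while answering."
    # Route the planner's answer through the response generator.
    return responder(query, final_response), previous_actions

# A mock planner that runs one task round, then finishes.
def planner(query, previous_actions):
    if not previous_actions:
        return ["lookup"]
    return [PlanFinish(f"answer to {query!r}")]

answer, actions = run(
    "What is CHA?",
    planner,
    executor=lambda action: f"did {action}",
    responder=lambda q, thinker: f"Empathic reply: {thinker}",
)
print(answer)
```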