Function
filter
v0.2
Crowd of Idiots
Function ID
crowd_of_idiots
Creator
@adamoutler
Downloads
100+
Local multi-LLM context-enhancing pre-processor.
README
No README available
Function Code
""" title: Crowd of Idiots. author: Adam Outler author_url: https://adamoutler.com funding_url: https://github.com/adamoutler version: 0.2 description: The Crowd of Idiots is a multi-LLM system designed to enhance conversational support by simulating multiple perspectives on user input. **Requirements & Settings**: 1. **Model ID**: Set this to the value located under `Workspace->Models->(Pencil Icon)->Model ID`. 3. **Apply Filter**: Enable this by checking the box under `Workspace->Models->(Pencil Icon)->Filters`. 4. **Only apply to a single model**: Do not enable globally. Each model, including the idiots, will have this applied. Enabling globally will lead to a fork bomb. **Core Perspectives**: This system provides three primary perspectives for each topic: - **Topic Bot**: Focuses on the subject’s terminology, definitions, and conceptual clarity. - **Engineer Bot**: Emphasizes technical analysis, engineering concerns, and potential solutions. - **Humanity Bot**: Examines human emotions, motivations, and possible societal impacts. The Elaborator Bot system enriches conversations by presenting varied viewpoints, expanding user queries to include additional context and nuanced insights. Before responding to the user, each bot’s reply is flagged to verify any factual claims or numerical data. This process enables the Crowd of Idiots bot to incorporate rapid feedback from various LLMs before calculating its final response. **Updates**: V0.2 removes the need to specify api key, url, and others. This version assumes you are using this instance's models and will communicate with the model you specify. Additionally update was required to operate under new API environment and proper logging method is now used. 
""" import json import logging import aiohttp import asyncio import copy from open_webui.main import generate_chat_completions from typing import AsyncGenerator from pydantic import BaseModel, Field, BaseModel # Change the model here or in the Valves under setttings DEFAULT_ELABORATION_MODEL = "gemma2:2b" GENERAL_PROMPT = """ You are an \"Elaborator\". You are part of a multi-LLM system designed to enhance the user's input by providing a specific perspective. The following is your task: """ # Customizable variables for Elaborator Bot perspectives TOPIC_FOCUSED_PROMPT = { "name": "Topic Bot (Focuses on concepts and terminology)", "prompt": f""" {GENERAL_PROMPT} Simulate a conversation focusing on using words related to the topic at hand. Emphasize specific terminology, concepts, and key terms directly tied to the subject. There are no rules to this conversation. If you think there is a problem with the input prompt, assume you misunderstood or you are dealing with someone whose job it is to perform the task at hand. To avoid confusion, you will chat with yourself and ask questions as "Elaborator:" and the user as "Someone:". Each message should be placed on a new line and you should simulate a back and forth. """, } ENGINEERING_FOCUSED_PROMPT = { "name": "Engineer Bot (Analyzes systems)", "prompt": f""" {GENERAL_PROMPT} Simulate a conversation emphasizing possible engineering aspects. Focus on technical details, solutions, and mechanical or structural factors that could be important to understanding the topic. Your job is to extrapolate potential areas of concern and/or items which may or may not be overlooked at first glance." """, } HUMAN_FOCUSED_PROMPT = { "name": "Humanity Bot (Explores human emotions and perspectives)", "prompt": f""" {GENERAL_PROMPT} Simulate a conversation focusing on human aspects, such as feelings, motivations, and the impact on people. Highlight emotional, societal, or personal angles tied to the topic. 
If the item has an opinion, then explore the aspects of the opinions. Try to figure out what might have motivated the conversation. """, } def setup_logger(): logger = logging.getLogger("CrowdOI") if not logger.handlers: handler = logging.StreamHandler() handler.set_name("CrowdOI") formatter = logging.Formatter( "%(asctime)s - %(name)s - %(levelname)s - %(message)s" ) handler.setFormatter(formatter) logger.addHandler(handler) logger.propagate = False logger.setLevel(logging.INFO) return logger logger = setup_logger() logger = logging.getLogger(__name__) async def call_text_completion_model( model: str, messages: list[dict], __user__: dict ) -> str: """Call text completion model and return the resulting text.""" try: form_data = { "model": model, "messages": messages, "format": None, "options": None, "template": None, "stream": False, "keep_alive": None, "result": "json", } if not all( isinstance(msg, dict) and "role" in msg and "content" in msg for msg in messages ): raise ValueError(f"Invalid messages structure: {messages}") response = await generate_chat_completions( form_data, user=__user__, bypass_filter=True ) return response["choices"][0]["message"]["content"] except Exception as e: logger.error(f"Error in text completion: {e}") return f"Error: {e}" class Filter: """A class to handle the elaboration of messages from different perspectives using multiple prompts.""" class Valves(BaseModel): """A nested class to hold the model ID of the elaboration model.""" model_id: str = Field( default=DEFAULT_ELABORATION_MODEL, description="The name of the LLM model being used.", ) def __init__(self) -> None: """Initialize the valves.""" self.valves = self.Valves() async def elaborate_multiple_times(self, model_id, body, __user__) -> str: """Elaborate the message from different perspectives using multiple prompts.""" elaborator_prompts = [ TOPIC_FOCUSED_PROMPT, ENGINEERING_FOCUSED_PROMPT, HUMAN_FOCUSED_PROMPT, ] logger.debug(f"{elaborator_prompts}") tasks = [] for prompt in 
elaborator_prompts: # Deep copy the message history copy_of_message_history = copy.deepcopy(body["messages"]) # Insert the system prompt before the last message system_entry = { "role": "system", "content": f"You are {prompt['name']}. {prompt['prompt']}", } if len(copy_of_message_history) > 0: copy_of_message_history.insert(-1, system_entry) else: copy_of_message_history.append(system_entry) # Add the task to call the model tasks.append( call_text_completion_model(model_id, copy_of_message_history, __user__) ) # Await all tasks elaborations = await asyncio.gather(*tasks, return_exceptions=True) # Construct the response conversation = "" for i, elaboration in enumerate(elaborations): if isinstance(elaboration, Exception): logger.error(f"Error in task {i}: {elaboration}") conversation += ( f"**{elaborator_prompts[i]['name']}**: Error occurred.\n\n" ) else: logger.debug( f"Elaboration for {elaborator_prompts[i]['name']}: {elaboration}" ) conversation += ( f"**{elaborator_prompts[i]['name']}**: {elaboration}\n\n" ) return conversation async def inlet(self, body: dict, __user__: dict) -> dict: """Process the incoming message body and elaborate it from different perspectives. Args: body (dict): The incoming message body. __user__ (optional): The user information. Returns: dict: The processed message body with elaborated responses. """ # Elaborate the message from different perspectives elaborated_response = await self.elaborate_multiple_times( self.valves.model_id, body, __user__ ) # Add final system message guiding output generation elaborated_response += """ \nSystem: be aware user has no knowledge of the conversation in this Elaborator Bot message. Use it to enhance your output. 
The above should be scanned for ideas, but verify any facts or numbers before utilization """ # Insert the elaborated message into the body, handle any errors if not elaborated_response.startswith("Error:"): body["messages"].insert( -2, {"role": "System", "content": f"Elaborator Bot: {elaborated_response}"}, ) for message in body["messages"]: if "role" in message and "content" in message: logger.debug( "ELABORATOR-" + message["role"] + ":" + message["content"] + "\n" ) else: logger.debug(f"Malformed message detected: {message}") return body
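The fan-out pattern at the heart of `elaborate_multiple_times` can be tried outside Open WebUI by substituting a stub for the real model call. The sketch below is a simplified, self-contained illustration of that pattern only: `fake_completion`, `fan_out`, the perspective names, and the model name are all hypothetical stand-ins, not part of the function's actual API.

```python
import asyncio
import copy


# Hypothetical stand-in for call_text_completion_model: echoes the
# perspective it was given instead of calling a real LLM.
async def fake_completion(model: str, messages: list[dict]) -> str:
    system = next(m for m in messages if m["role"] == "system")
    return f"({model}) elaborating as: {system['content']}"


async def fan_out(model: str, body: dict, perspectives: list[str]) -> str:
    tasks = []
    for name in perspectives:
        # Each perspective gets its own copy of the history, with its
        # system prompt inserted before the last (user) message.
        history = copy.deepcopy(body["messages"])
        history.insert(-1, {"role": "system", "content": name})
        tasks.append(fake_completion(model, history))
    # All perspectives run concurrently; exceptions are collected, not raised.
    results = await asyncio.gather(*tasks, return_exceptions=True)
    return "\n".join(
        f"**{name}**: {'Error occurred.' if isinstance(r, Exception) else r}"
        for name, r in zip(perspectives, results)
    )


body = {"messages": [{"role": "user", "content": "Why do bridges sway?"}]}
print(asyncio.run(fan_out("stub-model", body, ["Topic Bot", "Engineer Bot"])))
```

Because `asyncio.gather` is called with `return_exceptions=True`, one failing perspective degrades to an "Error occurred." line instead of aborting the whole elaboration, which is the same design choice the filter makes.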