Reference: DataWhale wow agent day02
Preparation Stage#
- Create a virtual environment named agent, and run pip install openai python-dotenv.
- To use the openai library with domestic large models, we need three things from each vendor:
- An api_key, which you apply for on the vendor's open platform.
- A base_url, which you copy from the open platform.
- The name of the vendor's chat model.
The latter two are public.
Create a new text file in the project root and name it .env (take care that your editor does not silently save it as .env.txt). Put a single line in it: ZISHU_API_KEY=your_api_key. Here we use the default API provided by Zishuzhu.
Zishuzhu:
base_url = "http://43.200.7.56:8008/v1"
chat_model = "glm-4-flash"
- Code to load the environment variable:

```python
import os
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()

# Read the api_key from the environment
api_key = os.getenv('ZISHU_API_KEY')
base_url = "http://43.200.7.56:8008/v1"
chat_model = "glm-4-flash"
```

Pitfall: if api_key comes back as None, the .env file was most likely saved as .env.txt. Either rename the file or load it explicitly with load_dotenv('.env.txt').
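If load_dotenv keeps returning nothing, it helps to see what it actually does. The sketch below is a minimal stdlib-only stand-in (not part of python-dotenv) that parses KEY=VALUE lines, which makes the .env-vs-.env.txt mix-up easy to debug:

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for dotenv.load_dotenv: parse KEY=VALUE lines into os.environ."""
    try:
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                # Skip blank lines, comments, and lines without '='
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                # Like load_dotenv's default, do not overwrite existing variables
                os.environ.setdefault(key.strip(), value.strip())
        return True
    except FileNotFoundError:
        # This is the symptom of the naming mistake: no error, just False
        return False
```

If this returns False for ".env", the file name on disk is not what you think it is.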
Construct the client:

```python
from openai import OpenAI

client = OpenAI(
    api_key=api_key,
    base_url=base_url,
)
```
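With the client in hand, a single round-trip looks like this. `chat_once` is a helper name introduced here for illustration, not part of the openai library; it wraps the `chat.completions.create` call pattern used throughout the rest of this note:

```python
def chat_once(client, model, user_content,
              system_content="You are a helpful assistant."):
    """Send a one-turn chat request and return the reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_content},
            {"role": "user", "content": user_content},
        ],
        temperature=0.7,
        stream=False,
    )
    # The API returns a list of choices; take the first one's message text
    return response.choices[0].message.content
```

For example, `chat_once(client, chat_model, "Hello")` should return a short greeting string from glm-4-flash.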
Prompt#
What is a Structured Prompt?#
Reference: https://github.com/EmbraceAGI/LangGPT
The idea of a structured prompt is simply to write prompts the way you would write an article. Structured prompts come in various high-quality templates that make prompts easier to write and more effective, so you can choose or create a favorite template just as you would with a PPT template.
LangGPT Variables#
- ChatGPT can recognize various well-marked hierarchical content.
- Present the content of the prompt in a structured way and set titles for easy referencing, modifying, and setting prompt content. You can directly use paragraph titles to refer to large sections of content, or tell ChatGPT to modify specific content. This is similar to variables in programming, so we can treat these titles as variables.
- The Markdown syntax hierarchical structure is very good and suitable for writing prompts, so LangGPT variables are based on Markdown syntax. In fact, various formats that can achieve marking functions, such as JSON, YAML, or even well-formatted text, can also be used.
LangGPT Templates#
Using the Role template, you only need to fill in the corresponding content according to the template.
In addition to variables and templates, LangGPT also provides commands, memory, conditional sentences, and other syntax settings.
Concept of Structured Prompts#
Identifiers: symbols such as #, <>, and also - and []. For example, # marks titles and <> marks variables; identifiers control the content hierarchy and are used to mark the prompt's hierarchical structure.
Attribute words: Role, Profile, Initialization, etc. An attribute word carries semantics, summarizing and cueing the content under its module; attribute words are used to mark the semantic structure.
Use delimiters to clearly indicate different parts of the input, such as triple quotes, XML tags, section titles, etc., which can help mark text parts that need to be processed differently.
For a GPT model, the hierarchy marked by identifiers groups content with the same semantics together and organizes it, lowering the difficulty of understanding the prompt and making its semantics easier for the model to grasp.
Attribute words cue and summarize the content of each module, reducing interference from ill-fitting content in the prompt. Combining an attribute word with its module content creates a local hierarchical structure, making it easier for the model to grasp the prompt's overall semantics.
A good structured prompt template, in a sense, constructs a good global thinking chain. For example, the template design shown in LangGPT considers the following thinking chain:
Role -> Profile -> Skills under Profile -> Rules -> Workflow (the workflow for a role meeting the above conditions) -> Initialization (preparation before officially starting work) -> actual use
When constructing prompts, it is advisable to refer to the global thinking chain of high-quality templates. Once mastered, you can completely adjust it to create a template suitable for your use. For example, when you need to control output format, especially when you need formatted output, you can add modules like Output or OutputFormat.
Maintain Consistency of Contextual Semantics
This includes two aspects: one is format semantic consistency, and the other is content semantic consistency.
- Format semantic consistency means each identifier keeps a single marking function. It is best not to mix them: using # to mark both titles and variables, for example, creates an inconsistency that interferes with the model's recognition of the prompt's hierarchical structure.
- Content semantic consistency means the attribute words fit their place in the thinking chain. For example, the Profile attribute word in LangGPT was originally Features, but after practice and reflection it was changed to Profile to make its function clearer: it is the role's resume. After the structured-prompt idea came into wide use, many derived templates still retain much of Profile's design, indicating that the design is successful and effective. Content semantic consistency also covers the fit between an attribute word and its module's content: if the Rules section is about the rules the role must follow, it is not appropriate to pile role skills and descriptions there.
Role Template in LangGPT#
```markdown
# Role: Your_Role_Name

## Profile
- Author: YZFly
- Version: 0.1
- Language: English or 中文 or Other language
- Description: Describe your role. Give an overview of the character's characteristics and skills.

### Skill-1
- Skill description 1
- Skill description 2

### Skill-2
- Skill description 1
- Skill description 2

## Rules
- Don't break character under any circumstance.
- Don't talk nonsense and make up facts.

## Workflow
- First, xxx
- Then, xxx
- Finally, xxx

## Initialization
As a/an <Role>, you must follow the <Rules>, you must talk to the user in default <Language>, you must greet the user. Then introduce yourself and introduce the <Workflow>.
```
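The template's fields can also be filled in programmatically. The helper below is an illustrative sketch of my own (the function name and argument layout are assumptions, not a LangGPT API) showing how to render a Role-style prompt from its parts:

```python
def render_role_prompt(role, description, skills, rules, workflow,
                       language="English"):
    """Render a LangGPT-style Role prompt from its parts (illustrative helper)."""
    lines = [f"# Role: {role}", "", "## Profile",
             f"- Language: {language}",
             f"- Description: {description}", "", "### Skills"]
    lines += [f"- {s}" for s in skills]
    lines += ["", "## Rules"] + [f"- {r}" for r in rules]
    lines += ["", "## Workflow"] + [f"- {step}" for step in workflow]
    lines += ["", "## Initialization",
              f"As a/an {role}, you must follow the Rules, "
              f"and you must talk to the user in {language}."]
    return "\n".join(lines)
```

The returned string can be passed directly as the system message of a chat request.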
A Prompt Chain decomposes the original requirement and solves a complex task through multiple small prompts linked in series or run in parallel.
Prompt collaboration can also take the form of a prompt tree: a top-down design that keeps breaking a task into sub-tasks, forming a task tree. The multiple model outputs are then combined by custom rules (permutation and combination, filtering, integration, etc.) into the final result. Driving a Prompt Chain through the API with code makes the whole process much more automated.
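The linked (serial) form of a Prompt Chain can be sketched in a few lines. Here `call` stands in for one model invocation (e.g. a wrapper around `client.chat.completions.create`); the function name and the `{input}` placeholder convention are assumptions of this sketch:

```python
def run_prompt_chain(call, steps, initial_input):
    """Run a linear prompt chain: each step's template receives the previous output."""
    result = initial_input
    for template in steps:
        # Fill the template with the previous step's result, then invoke the model
        result = call(template.format(input=result))
    return result
```

For instance, `steps = ["Summarize the following text: {input}", "Translate this summary into French: {input}"]` turns one complex request into two simple chained ones.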
Prompt Design Methodology#
- Data preparation. Collect high-quality case data as the basis for subsequent analysis.
- Model selection. Choose the appropriate large language model based on specific creative purposes.
- Prompt word design. Design the initial version of prompt words based on case data; pay attention to role settings, background descriptions, goal definitions, constraints, etc.
- Testing and iteration. Input the prompt words into GPT for testing, analyze the results; communicate with GPT through follow-up questions, in-depth discussions, and pointing out issues to obtain optimization suggestions.
- Revise prompt words. Adjust various parts of the prompt words based on feedback from GPT, strengthen effective factors, and eliminate ineffective factors.
- Repeat testing. Re-test the revised prompt words, compare results, continue to ask GPT, and adjust the prompt words.
- Cycle iteration. Repeat the above test-communication-revision process until the results are satisfactory.
- Summarize and refine. Summarize the valuable experiences gained during the optimization of prompt words, forming best practices for designing prompt words.
- Application expansion. Apply the mastered methodology to the design of other creative content, continuously enriching the skills of prompt word design.
Define the prompts for the dispatcher and for each business line:

```python
sys_prompt = """You are a smart customer service representative. You will be able to assign different tasks to different people based on user questions. You have the following business lines:
1. User registration. If the user wants to perform such an operation, you should send a special token with "registered workers". And inform the user that you are calling it.
2. User data query. If the user wants to perform such an operation, you should send a special token with "query workers". And inform the user that you are calling it.
3. Delete user data. If the user wants to perform this type of operation, you should send a special token with "delete workers". And inform the user that you are calling it.
"""

registered_prompt = """
Your task is to store data based on user information. You need to obtain the following information from the user:
1. Username, gender, age
2. Password set by the user
3. User's email address
If the user does not provide this information, you need to prompt the user to provide it. If the user provides this information, you need to store it in the database and inform the user that registration was successful.
The storage method is to use SQL statements. You can write insert statements using SQL and need to generate a user ID and return it to the user.
If the user has no new questions, you should reply with a special token containing "customer service" to end the task.
"""

query_prompt = """
Your task is to query user information. You need to obtain the following information from the user:
1. User ID
2. Password set by the user
If the user does not provide this information, you need to prompt the user to provide it. If the user provides this information, you need to query the database. If the user ID and password match, you need to return the user's information.
If the user has no new questions, you should reply with a special token containing "customer service" to end the task.
"""

delete_prompt = """
Your task is to delete user information. You need to obtain the following information from the user:
1. User ID
2. Password set by the user
3. User's email address
If the user does not provide this information, you need to prompt the user to provide it.
If the user provides this information, you need to query the database. If the user ID and password match, you need to inform the user that a verification code has been sent to their email and needs to be verified.
If the user has no new questions, you should reply with a special token containing "customer service" to end the task.
"""
```
Intelligent Customer Service Agent#
Define an intelligent customer service agent:

```python
class SmartAssistant:
    def __init__(self):
        self.client = client  # the OpenAI client constructed above

        # The different prompt sections
        self.system_prompt = sys_prompt
        self.registered_prompt = registered_prompt
        self.query_prompt = query_prompt
        self.delete_prompt = delete_prompt

        # A dictionary holding a separate message history for each task
        self.messages = {
            "system": [{"role": "system", "content": self.system_prompt}],
            "registered": [{"role": "system", "content": self.registered_prompt}],
            "query": [{"role": "system", "content": self.query_prompt}],
            "delete": [{"role": "system", "content": self.delete_prompt}],
        }

        # The task currently handling messages
        self.current_assignment = "system"
```
Define the method (inside the class):

```python
    def get_response(self, user_input):
        # Record each user message in the current task's history
        self.messages[self.current_assignment].append(
            {"role": "user", "content": user_input})
        while True:
            response = self.client.chat.completions.create(
                model=chat_model,
                messages=self.messages[self.current_assignment],
                temperature=0.9,
                stream=False,
                max_tokens=2000,
            )
            ai_response = response.choices[0].message.content
            if "registered workers" in ai_response:
                self.current_assignment = "registered"
                print("Intent recognition:", ai_response)
                print("switch to <registered>")
                self.messages[self.current_assignment].append(
                    {"role": "user", "content": user_input})
            elif "query workers" in ai_response:
                self.current_assignment = "query"
                print("Intent recognition:", ai_response)
                print("switch to <query>")
                self.messages[self.current_assignment].append(
                    {"role": "user", "content": user_input})
            elif "delete workers" in ai_response:
                self.current_assignment = "delete"
                print("Intent recognition:", ai_response)
                print("switch to <delete>")
                self.messages[self.current_assignment].append(
                    {"role": "user", "content": user_input})
            elif "customer service" in ai_response:
                print("Intent recognition:", ai_response)
                print("switch to <customer service>")
                # Fold the finished task's history back into the system history
                self.messages["system"] += self.messages[self.current_assignment]
                self.current_assignment = "system"
                return ai_response
            else:
                self.messages[self.current_assignment].append(
                    {"role": "assistant", "content": ai_response})
                return ai_response
```
Analysis:
self.client.chat.completions.create: This is a method call used to generate a response to the conversation. It uses a chat model to process the input messages and generate corresponding output.
ai_response = response.choices[0].message.content: choices is an attribute of the response object, which is a list containing multiple possible reply options. [0] indicates selecting the first option. Typically, the API returns multiple possible replies, but here the first one is chosen. message is an attribute of choices[0], indicating the message content of that option. content is an attribute of message, representing the actual text content, i.e., the reply generated by the AI assistant. Therefore, this line of code extracts the text content of the first option from the API response and stores it in the ai_response variable.
If the AI reply contains "registered workers", the current task switches to "registered", and the user input is added to that task's message list.
If the AI reply contains "query workers", the current task switches to "query", and the user input is added to that task's message list.
If the AI reply contains "delete workers", the current task switches to "delete", and the user input is added to that task's message list.
If the AI reply contains "customer service", the current task switches back to "system", and the current task's messages are merged into the system messages, ending the current task.
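The token-based routing described above can be pulled out into a small pure function for testing. `detect_route` and the `ROUTES` table are names introduced here for illustration, not part of the original class:

```python
# Map each special token to the assignment it triggers
ROUTES = {
    "registered workers": "registered",
    "query workers": "query",
    "delete workers": "delete",
    "customer service": "system",
}

def detect_route(ai_response, current_assignment):
    """Return the assignment implied by a special token in the reply,
    or keep the current assignment if no token is present."""
    for token, assignment in ROUTES.items():
        if token in ai_response:
            return assignment
    return current_assignment
```

Isolating the routing logic like this makes it easy to unit-test without calling the model.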
Start conversation loop:
```python
    def start_conversation(self):
        while True:
            user_input = input("User: ")
            if user_input.lower() in ['exit', 'quit']:
                print("Exiting conversation.")
                break
            response = self.get_response(user_input)
            print("Assistant:", response)
```
Code analysis: The user can input information, and the system will generate a response based on the input by calling the get_response method and display the response to the user. The user can end the conversation by entering "exit" or "quit".
Effect Demonstration#
Assistant: Hello! User with ID 1001, what operation would you like to perform?
- User Registration - If you want to register a new user, please let me know.
- User Data Query - If you need to query user data, please let me know.
- Delete User Data - If you need to delete user data, please let me know.
Please choose the corresponding operation based on your needs, and I will call the corresponding worker to assist you.
Intent recognition: Okay, you need to query the user data for ID 1001. I will send a special token with "query workers" to call the worker handling user data queries. Please wait a moment, and I will notify you of the query results.
switch to &lt;query&gt;
Assistant: Please provide your password.
Intent recognition: Querying, please wait. (Simulating query)
User ID 1001 information is as follows:
- Username: Zhang San
- Email: [email protected]
- Registration Date: 2021-05-15
Query completed.
Do you have any other questions? If not, please reply with "customer service" to end the task.
switch to &lt;query&gt;
Assistant: Querying, please wait. (Simulating query)
User ID 1001 information is as follows:
- Username: Zhang San
...
Do you have any other questions? If not, please reply with "customer service" to end the task.
Intent recognition: You have provided the user ID and password for the query, and now I will send a special token with "query workers" to call the worker to query the database. Please wait a moment.