Update docs/llm/Effective-LLM-Prompting.md
@@ -62,6 +62,37 @@ This guide is crafted to empower developers and enthusiasts in creating effectiv
- ❌ "Explain machine learning."
- ✅ "Write a comprehensive explanation of machine learning for a layman, including practical examples, without using jargon."
## 💡 Practical Application: Iterating on Prompts Based on LLM Responses
This section offers practical strategies for refining prompts based on the responses they elicit from Large Language Models (LLMs), a process that is crucial for obtaining accurate and relevant outputs.
### 🔄 Iterative Refinement Process
- **Initial Evaluation**: Critically assess if the LLM's response aligns with the prompt's intent.
- **Identify Discrepancies**: Locate areas where the response differs from the expected outcome.
- **Adjust for Clarity**: Refine the prompt to clarify the expected response.
- **Feedback Loop**: Use the LLM's output to iteratively adjust the prompt for better accuracy.
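To make this loop concrete, here is a minimal sketch in Python. `call_llm` is a hypothetical stand-in for whatever client your provider offers, and the keyword check is only a crude proxy for the evaluation step; both are assumptions, not part of any particular library.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM provider's client call."""
    return "<model response>"  # replace with a real API call

def meets_expectations(response: str, required_terms: list[str]) -> bool:
    # Initial evaluation: a crude proxy for checking that the response covers the requested points.
    return all(term.lower() in response.lower() for term in required_terms)

def refine_prompt(prompt: str, missing: list[str]) -> str:
    # Adjust for clarity: restate what the previous answer left out.
    return prompt + "\nBe sure to explicitly cover: " + ", ".join(missing)

def iterate_prompt(prompt: str, required_terms: list[str], max_rounds: int = 3) -> str:
    response = ""
    for _ in range(max_rounds):
        response = call_llm(prompt)
        if meets_expectations(response, required_terms):
            break
        # Identify discrepancies and feed them back into the next prompt (feedback loop).
        missing = [t for t in required_terms if t.lower() not in response.lower()]
        prompt = refine_prompt(prompt, missing)
    return response
```

In practice the evaluation step is usually a human judgment rather than a keyword check; the point is that each round turns an observed gap into an explicit instruction in the next prompt.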
### 📋 Common Issues & Solutions
- **Overly Broad Responses**: Specify the scope and depth required in the prompt.
- **Under-Developed Answers**: Ask for explanations or examples to enrich the response.
- **Misalignment with Intent**: Clearly state the purpose of the information being requested.
- **Incorrect Assumptions**: Add information to the prompt to correct the LLM's assumptions.
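One way to apply these fixes is to keep a small library of corrective follow-up instructions and append the one that matches the problem you observed. The issue keys and wording below are purely illustrative.

```python
# Illustrative corrective suffixes, keyed by the issue observed in a response.
PROMPT_FIXES = {
    "too_broad": "Limit the answer to {scope} and no more than {length}.",
    "underdeveloped": "Expand each point with a short explanation and one concrete example.",
    "misaligned": "The purpose of this request is {purpose}; keep the answer focused on that.",
    "wrong_assumption": "Before answering, note this correction: {correction}.",
}

def patch_prompt(prompt: str, issue: str, **details: str) -> str:
    """Append the corrective instruction matching the observed issue."""
    return prompt + "\n" + PROMPT_FIXES[issue].format(**details)

# Example: tighten an overly broad request.
prompt = "Explain machine learning."
prompt = patch_prompt(prompt, "too_broad",
                      scope="supervised learning for beginners", length="three paragraphs")
```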
### 🛠 Tools for Refinement
- **Contrastive Examples**: Pair "do" and "don't" examples to clarify task boundaries.
- **Sample Outputs**: Provide examples of desired outputs.
- **Contextual Hints**: Embed hints in the prompt to guide the LLM.
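These three tools combine naturally into a single prompt. The sketch below simply assembles the pieces as strings; the section labels inside the prompt ("Do", "Don't", and so on) are one possible convention, not a required format.

```python
def build_refined_prompt(task: str, dos: list[str], donts: list[str],
                         sample_output: str, hints: list[str]) -> str:
    """Assemble a prompt from contrastive examples, a sample output, and contextual hints."""
    lines = [task, "", "Do:"]
    lines += [f"- {d}" for d in dos]
    lines += ["", "Don't:"]
    lines += [f"- {d}" for d in donts]
    lines += ["", "Example of the desired output:", sample_output]
    lines += ["", "Keep in mind:"]
    lines += [f"- {h}" for h in hints]
    return "\n".join(lines)

prompt = build_refined_prompt(
    task="Summarize the attached bug report for a release note.",
    dos=["Mention the affected version", "Keep it under 50 words"],
    donts=["Include stack traces", "Speculate about the root cause"],
    sample_output="Fixed a crash when saving files larger than 2 GB (affects v1.4).",
    hints=["The audience is end users, not developers."],
)
```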
### 🎯 Precision in Prompting
- **Granular Instructions**: Break down tasks into smaller steps.
- **Explicit Constraints**: Define clear boundaries and limits for the task.
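A precise prompt often reads like a small specification: numbered steps plus explicit constraints. The helper below is a sketch of that structure; the function name and wording are illustrative only.

```python
def precise_prompt(goal: str, steps: list[str], constraints: list[str]) -> str:
    """Turn a goal into granular, numbered instructions with explicit constraints."""
    numbered = [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    bounded = [f"- {c}" for c in constraints]
    return "\n".join([goal, "", "Follow these steps:", *numbered,
                      "", "Constraints:", *bounded])

print(precise_prompt(
    goal="Review the following SQL query for correctness.",
    steps=["Restate what the query is intended to do.",
           "List any logical errors.",
           "Propose a corrected query."],
    constraints=["Do not change the table schema.",
                 "Keep the explanation under 150 words."],
))
```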
### 🔧 Adjusting Prompt Parameters
- **Parameter Tuning**: Experiment with verbosity, style, or tone settings.
- **Prompt Conditioning**: Prime the LLM with a series of related prompts before the main question.
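The two techniques are easiest to compare when wrapped in one small helper. `call_llm` is again a hypothetical wrapper, and `temperature` / `max_tokens` are named after the knobs most chat-completion APIs expose; check your provider's documentation for the exact parameters.

```python
from typing import Dict, List

def call_llm(messages: List[Dict[str, str]], temperature: float = 0.7,
             max_tokens: int = 512) -> str:
    """Hypothetical client wrapper; parameter names mirror common chat APIs."""
    return "<model response>"  # replace with a real API call

# Parameter tuning: the same question, sampled more conservatively for a factual task.
factual = call_llm(
    [{"role": "user", "content": "List the planets of the solar system."}],
    temperature=0.0,   # low temperature for deterministic, factual output
    max_tokens=100,    # cap verbosity
)

# Prompt conditioning: prime the model with related turns before the main question.
conditioned = call_llm([
    {"role": "user", "content": "We are discussing battery chemistry for electric cars."},
    {"role": "assistant", "content": "Understood. I will keep answers focused on EV batteries."},
    {"role": "user", "content": "Now compare LFP and NMC cells for a city commuter car."},
])
```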
Implementing these strategies can significantly improve the effectiveness of your prompts, leading to more accurate and relevant LLM outputs.
## 🔚 Conclusion
This guide is designed to help refine your prompt crafting skills, enabling more effective and efficient use of LLMs for a range of applications.
@@ -116,3 +147,14 @@ This taxonomy provides a hierarchical framework for categorizing educational obj
- **Creating**: Innovate and formulate new concepts or products.
- Initiate and develop original creations or ideas that enhance or extend existing paradigms.
# Latent Content
This term refers to the reservoir of knowledge, facts, and concepts encoded in a model during training that must be activated through effective prompting.
- **Training Data**: Source of latent content derived exclusively from the data used during the model's training process.
- **World Knowledge**: Broad facts and insights pertaining to global understanding.
- **Scientific Information**: Detailed data encompassing scientific principles and theories.
- **Cultural Knowledge**: Insights relating to various cultures and societal norms.
- **Historical Knowledge**: Information on historical events and notable individuals.
- **Languages**: The structural elements of language, including grammar, vocabulary, and syntax.
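A prompt can name the slices of latent content it wants activated. The snippet below merely composes such a prompt; the domain labels mirror the categories above and are illustrative.

```python
# Ask the model to draw explicitly on specific kinds of latent content.
domains = ["historical knowledge", "scientific information", "cultural knowledge"]

prompt = (
    "Explain how the printing press changed European society. "
    "Draw on your " + ", ".join(domains) + ", and note which kind of "
    "knowledge each major claim relies on."
)
print(prompt)
```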