A project manager drafts a vague prompt for an LLM to analyze customer feedback and generate insights.
The response is generic, and some of its key details appear unrelated to the feedback that was provided.
What could explain this behavior, and how should the manager proceed?
- A. The LLM failed to generate accurate insights because it requires external validation from another AI model before producing relevant outputs, suggesting the manager should integrate multiple AI systems for better results.
- B. The LLM response depends entirely on its learned patterns and token analysis, and adding more detailed customer feedback won't significantly alter the result.
- C. The lack of specificity in the prompt caused the LLM to fall back on general knowledge or assumptions, requiring the manager to refine the prompt by including precise context or related references for accuracy.
- D. The LLM breaks the input into tokens but misinterprets individual words, meaning the issue arises from tokenization granularity rather than the structure of the prompt itself.
Answer: C
Explanation:
Because the prompt was vague, the LLM defaulted to generic outputs and assumptions. To fix this, the manager should refine the prompt with precise context and relevant details, ensuring the model generates accurate, feedback-based insights.
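The refinement described above can be sketched in code. This is a minimal illustration, not a real API call: `build_prompt` is a hypothetical helper that embeds the verbatim feedback and a precise goal into the prompt instead of asking generically for "insights"; sending the string to an actual model is omitted.

```python
def build_prompt(feedback_items, goal):
    """Construct a specific, context-rich prompt from raw customer feedback.

    Numbering the feedback and asking the model to cite item numbers
    anchors the output to the provided data rather than general knowledge.
    """
    numbered = "\n".join(f"{i}. {text}" for i, text in enumerate(feedback_items, 1))
    return (
        "You are analyzing customer feedback for a project retrospective.\n"
        f"Goal: {goal}\n"
        f"Feedback (verbatim):\n{numbered}\n"
        "Base every insight only on the feedback above and cite the item number."
    )

feedback = [
    "Checkout fails on mobile Safari.",
    "Support replies within minutes -- great!",
]
prompt = build_prompt(feedback, "List the top usability issues with evidence.")
print(prompt)
```

Compared with a vague prompt such as "Analyze our customer feedback", this version gives the model the actual data, a defined task, and a grounding constraint, which is exactly why option C is the correct diagnosis.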