singhsidhukuldeep posted an update (Sep 19)
Good folks from VILA Lab at Mohamed bin Zayed University of AI have introduced 26 guiding principles for optimizing prompts when interacting with large language models (LLMs) like LLaMA and GPT.

These principles aim to enhance LLM response quality, accuracy, and task alignment across models of various scales; a short sketch after the list shows how several of them can be combined in a single prompt.

1. Be direct and concise, avoiding unnecessary politeness.
2. Specify the intended audience.
3. Break complex tasks into simpler steps.
4. Use affirmative directives instead of negative language.
5. Request explanations in simple terms for clarity.
6. Mention a potential reward for better solutions.
7. Provide examples to guide responses.
8. Use consistent formatting and structure.
9. Clearly state tasks and requirements.
10. Mention potential penalties for incorrect responses.
11. Request natural, human-like answers.
12. Encourage step-by-step thinking.
13. Ask for unbiased responses without stereotypes.
14. Allow the model to ask clarifying questions.
15. Request explanations with self-tests.
16. Assign specific roles to the model.
17. Use delimiters to separate sections.
18. Repeat key words or phrases for emphasis.
19. Combine chain-of-thought with few-shot prompts.
20. Use output primers to guide responses.
21. Request detailed responses on specific topics.
22. Specify how to revise or improve text.
23. Provide instructions for generating multi-file code.
24. Give specific starting points for text generation.
25. Clearly state content requirements and guidelines.
26. Request responses similar to provided examples.
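
Several of these principles compose naturally in one prompt. Here is a minimal Python sketch (not from the paper) that combines role assignment (#16), audience specification (#2), delimiters (#17), a few-shot example (#7/#19), and a step-by-step output primer (#12/#20). The tutoring task, wording, and example question are illustrative assumptions, not the paper's benchmark prompts.

```python
# Minimal sketch: composing several of the 26 principles into one prompt string.
# The task and wording are illustrative, not taken from the paper.

def build_prompt(question: str) -> str:
    """Compose a prompt applying principles 2, 7, 12, 16, 17, and 20."""
    return "\n".join([
        # Principle 16: assign a specific role to the model.
        "You are an experienced Python tutor.",
        # Principle 2: specify the intended audience.
        "Your audience is a beginner with no prior programming experience.",
        # Principle 17: use delimiters to separate sections.
        "###Example###",
        # Principles 7 and 19: a few-shot example with chain-of-thought.
        "Q: What does len('abc') return?",
        "A: Let's think step by step. 'abc' has three characters, so len returns 3.",
        "###Question###",
        f"Q: {question}",
        # Principles 12 and 20: encourage step-by-step thinking and prime the output.
        "A: Let's think step by step.",
    ])

if __name__ == "__main__":
    print(build_prompt("What does 'hello'[1:3] evaluate to?"))
```

The resulting string can be sent to any LLM as-is; keeping each principle on its own labeled line makes it easy to ablate individual principles and compare responses.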

Results show significant improvements in both "boosting" (response quality enhancement) and "correctness" across different model scales. Using the ATLAS benchmark, specialized prompts improved response quality and accuracy by an average of 57.7% and 67.3%, respectively, when applied to GPT-4.