
The Secret to Supercharging LLMs: It's Not Answers, It's Questions

Blog Article Published: 04/10/2024

Written by Dr. Chantal Spleiss, Co-Chair of the CSA AI Governance & Compliance Working Group.

Stop talking to your AI and start collaborating with it. Prompt engineering is the key to unlocking the full potential of LLMs. This mastery of questioning is so valuable that a prompt engineer may earn as much as 120k USD a year – not for providing answers, but for asking the right questions! Want more efficient AI interactions? Here's a powerful yet overlooked technique: customize ChatGPT (for example). This transforms your LLM into a collaborator, refining your questions on the fly for the best possible answers.


Customization Tips

Become your own prompt engineer! Use the settings to customize ChatGPT and tell it how you want it to respond. Instruct ChatGPT to check each prompt and optimize it to produce the most meaningful and correct answer. Configure it to stop and ask for any additional information it needs – for clarification, for focus, and to prevent misunderstandings. Last but not least, you can request a “prompt check performed” confirmation to give you confidence that it has considered your intent and is processing the best possible prompt. Want to go even deeper? Try meta-prompting.
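
As a rough illustration, here is a minimal Python sketch, assuming the OpenAI Python SDK, of how such instructions could be applied programmatically. The instruction text, model name, and sample question are assumptions for illustration only; in the ChatGPT app itself you would paste similar text into the custom instructions settings.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative custom instructions: check and optimize each prompt, ask
# clarifying questions when needed, and confirm with "Prompt check performed".
CUSTOM_INSTRUCTIONS = (
    "Before answering, review my prompt and optimize it for clarity and focus. "
    "If essential information is missing or ambiguous, stop and ask me for it "
    "instead of guessing. When you answer, start with the line "
    "'Prompt check performed' so I know the optimized prompt was used."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Explain the main risks of shadow AI for a CISO."},
    ],
)
print(response.choices[0].message.content)
```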


Expand your possibilities with meta-prompting

What is meta-prompting? Think of it like a set of instructions for instructions. You provide a meta-prompt that outlines how ChatGPT should interpret and respond to your main prompts.

Meta-prompts empower you to fine-tune aspects like:

  • Focus.
    • Example 1: Prioritize accuracy over creativity.
    • Example 2: Prioritize creativity, and even hallucinations are welcome.
  • Tone.
    • Example 1: Answer in a friendly and humorous tone, but don’t use emojis.
    • Example 2: Answer in a professional, very polite tone.
  • Reasoning Process.
    • Example 1: Outline your thought process step-by-step before giving an answer.
    • Example 2: Ensure your answer follows sound logical principles. Identify any assumptions made, and state them clearly.

Meta-prompting lets you tailor how an LLM approaches your prompts. Please note that if you configure ChatGPT with meta-prompts, for example, it will apply them to all of your prompts. With this tool, you're not just using ChatGPT, you're shaping how it thinks. That's the power of becoming your own prompt engineer!
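
To make this concrete, here is a minimal sketch, again assuming the OpenAI Python SDK, of how a meta-prompt combining the focus, tone, and reasoning examples above could be applied to every request. The model name, helper function, and sample question are illustrative assumptions, not part of the article's own setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative meta-prompt combining the focus, tone, and reasoning examples above.
META_PROMPT = (
    "Prioritize accuracy over creativity. "
    "Answer in a professional, very polite tone. "
    "Outline your thought process step by step before giving your final answer, "
    "and state any assumptions you make."
)

def ask(question: str) -> str:
    """Send a question with the meta-prompt applied to every request."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": META_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What are the key elements of an AI governance program?"))
```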


Further prompt refinements

But prompt optimization goes beyond settings and meta-prompting. Research suggests that even small details can affect AI responses. For example, including polite words like "please" might lead to more helpful outputs. Quirky instructions, like asking the LLM to "take a deep breath," are also debated: some worry they distract it, while others believe they spark creativity, potentially even improving accuracy. Be creative and experiment with your own refinements: explore in which contexts subtle tweaks make a difference!
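
If you want to experiment systematically, here is a minimal sketch, assuming the OpenAI Python SDK, that sends the same question in a plain, a polite, and a "take a deep breath" variant so you can compare the responses side by side. The prompts and model name are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "List three common causes of LLM hallucinations."

# Illustrative prompt variants to compare side by side.
VARIANTS = {
    "plain": QUESTION,
    "polite": "Please " + QUESTION[0].lower() + QUESTION[1:],
    "deep breath": "Take a deep breath and work on this step by step. " + QUESTION,
}

for name, prompt in VARIANTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```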

The customization techniques and subtle refinements we've explored are powerful. But the real magic starts when we use them to craft questions that unlock new possibilities and deeper understanding.


The Art of Questioning

Prompt engineering brings an age-old truth into the spotlight: the quality of our questions determines the quality of answers. Questions are like keys: only the right ones open doors to new knowledge. To get the most out of an LLM, we need to become masters of asking! Ready to get better answers?

"We must remember that what we observe is not nature itself, but nature exposed to our method of questioning." ~ Werner Heisenberg

Sources: Prompt Politeness, Principled Instructions, Prompting Strategies, Brain & Breath


Figure 1: Meta-Prompts for Writing Blogs
