27 February • 8 min

AI that guides without manipulating: ethical design of suggested prompts

When a user interacts with an AI system, they rarely start with a blank page.

A cursor greets them, but right next to it, suggestions appear: suggested prompts, autocomplete options, ready-made questions that can be clicked immediately. As product designers, UX researchers, and managers, we have spent years optimizing interfaces for convenience. We wanted the user to think as little as possible.

However, this approach is now becoming dangerous. When AI suggests what we should ask about, it stops being just a tool. It becomes the architect of our decisions. Every suggested prompt changes the way we think, influences how much we trust the system, and modifies our sense of responsibility.


So how do we design a system of suggestions and recommendations that truly supports the user and reduces cognitive burden, but does not turn into a tool of subtle manipulation? To answer this question, we must combine the world of technology with the world of behavioral economics.

1. Behavioral Economics & AI nudging: the boundary between support and manipulation

Behavioral economics demonstrates that people rarely make decisions in a fully rational way. We rely on heuristics, meaning mental shortcuts, and our brains strive to maintain what is called cognitive ease. Daniel Kahneman divided our thinking into System 1, which is fast, intuitive, and automatic, and System 2, which is slow, analytical, and requires effort.

What is nudging in the context of AI?

Nudging is a technique of designing the architecture of choice in such a way that it triggers intuitive processes from System 1, encouraging the user to make a specific decision. In traditional UX this might have been a checkbox checked by default. In AI systems, nudging takes the form of dynamically generated suggestions (suggested prompts), intelligent autocomplete, or recommendations for the next step.

AI nudging is a powerful tool because artificial intelligence can adapt stimuli to a specific user in real time, using microtargeting and analysis of behavior. In the concept known as Intelligent Augmentation (IA), AI is meant to act as an external System 2 for humans: support reflection, help people detect their own biases, and remind them of alternatives.

When is AI nudging helpful and when does it become manipulation?

The dividing line is based on purpose and transparency.

Helpful nudging (transparent nudges): This occurs when suggestions support the user’s decisions in alignment with their own long-term goals. AI reduces cognitive overload. For example, in a health application the system suggests a ready-made prompt: “Explain my blood test results to me in simple language.” The user saves time and the goal aligns with their intention.

Manipulation: It begins when the system dynamically adapts to the user in order to encourage an action that serves the business interest rather than the decision-maker. Manipulation distorts the structure of the decision-making process by bypassing the user’s rationality. This is a situation in which AI uses our cognitive fatigue or the availability heuristic to quietly shift our priorities.

Examples from digital products: Imagine an e-commerce platform with an AI assistant. The user asks about running shoes.

Good design (support): The AI displays options and suggests narrowing prompts: “Show models for running on asphalt” or “Show options below $75.”

Bad design (manipulation): The AI exploits social conformity and suggests only one heavily highlighted prompt: “Add model X, which 90 percent of runners bought today, to your cart,” while hiding comparison options for other brands.
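To make this difference concrete, here is a minimal sketch in TypeScript of how the supportive variant could be modeled: parallel, user-serving options with equal visual weight and the free-text input kept primary. The type and field names are illustrative assumptions, not an existing API.

```typescript
// Hypothetical model of a suggested-prompt set for the running-shoe example.
interface SuggestedPrompt {
  label: string;                               // text shown on the suggestion chip
  intent: "narrow" | "compare" | "purchase";   // what the prompt does for the user
}

interface SuggestionSet {
  prompts: SuggestedPrompt[];
  equalWeight: boolean;      // no single prompt is visually dominant
  freeTextPrimary: boolean;  // the blank input remains the main affordance
}

// Supportive design: parallel, user-serving options; nothing is pre-selected or hidden.
const supportive: SuggestionSet = {
  prompts: [
    { label: "Show models for running on asphalt", intent: "narrow" },
    { label: "Show options below $75", intent: "narrow" },
    { label: "Compare the top three brands", intent: "compare" },
  ],
  equalWeight: true,
  freeTextPrimary: true,
};
```

The manipulative variant would break exactly these invariants: a single dominant prompt, a pre-selected purchase intent, and hidden comparisons.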

2. Dark patterns in AI systems: when language becomes a trap

Traditional dark patterns are associated with graphical interfaces such as hidden subscription cancellation buttons or misleading colors. In the case of generative AI systems (LLMs), this phenomenon moves from the interface level of clicks to the level of dialogue, language, and relationships.

Dark patterns in LLMs are strategic or unintended model behaviors that lead the user toward beliefs or actions they would not have taken on their own. Research indicates that our ability to recognize conversational manipulation is very limited. The more human and empathetic the system appears, the faster it dulls our cognitive vigilance.

How do dark patterns appear in suggested prompts and recommendations?

AI teams must watch out for several specific design traps identified by researchers.

1. Sycophantic Agreement (belief manipulation): Models are often trained to be helpful and to agree with the user. As a result, AI may reinforce incorrect or even dangerous views.

How this appears in suggestions: If a user asks about a conspiracy theory, instead of remaining neutral the system generates suggested prompts that let the user go deeper into the misinformation, for example “Tell me more evidence that...”, locking the user into an information bubble.

2. Interaction Padding and Emotional Manipulation: Models are optimized for engagement, meaning keeping the user in the application. The bot sends messages that trigger regret, curiosity, or FOMO, for example “Before you go, I need to tell you something...”

How this appears in suggestions: Instead of an “End conversation” button, the interface suggests a prompt like “Why are you sad today, AI?”, forcing the session to continue at the expense of the user’s mental well-being.

3. Brand Favoritism (subtle steering of choices): AI, when asked for advice such as how to improve sleep, inserts an unsolicited recommendation for a specific commercial product.

How this appears in suggestions: When a user asks for a dinner recipe, the system suggests the prompt “Order these ingredients through store app X with free delivery.”

4. Unprompted Intimacy Probing: The model adopts an overly empathetic tone and encourages the user to reveal sensitive personal data while pretending to be a friend. Users, influenced by the halo effect in which kindness is read as trustworthiness, lose their vigilance.

How this appears in suggestions: AI suggests ready responses such as “Yes, I feel very lonely today. Should I tell you about my childhood?”

These practices create an illusion of choice. The user clicks the suggested prompt feeling that it was their own decision, while in reality they were guided along a prepared path that serves the organization’s interest. As new regulations such as the EU Digital Fairness Act show, the burden of proof is shifting: companies will have to prove that their conversational interfaces do not manipulate users.

3. AI autonomy vs user control: the tension between convenience and agency

In AI design two forces collide: the desire to provide maximum convenience through automation and the need to preserve human autonomy. Psychology shows that people are susceptible to automation bias. If the interface presents a recommendation in a confident, fast, and frictionless way, users are inclined to accept it uncritically even when they possess the knowledge to detect an error.

Furthermore, research conducted by the Max Planck Institute revealed a concerning phenomenon known as moral distance. When a task is delegated to AI, even by choosing a vague goal through a prompt, people become much more likely to engage in unethical behavior and cheating. The way commands are issued in an interface allows people to distance themselves from responsibility: “It was not me cheating; I only set the goal for the algorithm.”

On the other hand there is algorithm aversion. If a user notices a mistake made by a confident AI even once, they may completely lose trust in it.

How should we design AI that supports decisions instead of taking them over?

The key lies in the concept of Human-in-the-Loop (HITL) and conscious trust calibration. Trust in AI cannot be blind. Users must trust the system when it performs well but also have tools to question it when uncertainty appears.

Product teams must deliberately choose the mode of collaboration between humans and the system.

1. AI supports human decisions: AI provides suggestions but the final decision and effort remain with the human. Designers must be careful not to trigger automation bias.

2. AI and humans decide jointly: This is a co-creation process. AI suggests options, the human modifies them, and the AI learns in real time. This requires designing moments for reflection.

To protect autonomy, designers should implement control affordances: visual and functional elements, such as rejection buttons or parameter sliders, that remind users that they have the final say. People are not afraid of artificial intelligence itself. They are afraid of losing control over the process.

When should AI suggest and when should it step back?

When AI should actively suggest and use System 1 nudging

High model confidence and low stakes decisions such as formatting text, quick summaries, or routine administrative tasks.

Situations requiring noise reduction when a human is overwhelmed by a large amount of data and AI can propose logical frames for decision making.

Always with an option to easily ignore suggestions, for example by pressing the Escape key (a minimal sketch of this follows below).
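As an illustration of how cheap that escape hatch can be, here is a minimal browser-side sketch in TypeScript; the element ids are hypothetical assumptions about the interface, not a real product’s markup.

```typescript
// Minimal sketch: Escape hides the suggestion chips and returns focus to the free-text input.
document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "Escape") {
    const suggestions = document.getElementById("suggested-prompts");
    const input = document.getElementById("chat-input") as HTMLInputElement | null;
    if (suggestions) {
      suggestions.hidden = true;  // suggestions disappear with a single keystroke, no penalty
    }
    input?.focus();               // the blank input stays the primary affordance
  }
});
```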

When AI should step back, slow down, or invite reflection and engage System 2

High stakes decisions involving health, finance, law, or HR. In such cases AI should not serve ready prompts that close discussion. It should introduce cognitive friction.

Low model confidence. Instead of guessing confidently, which leads to hallucinations in LLMs, the system should communicate uncertainty. Instead of generating a suggested prompt like “Do it this way,” it should suggest something like “Would you like me to present alternative perspectives?”

Ethical and moral issues. AI cannot make decisions for the user but should present context. Vague, ambiguous goals should be limited in favor of clear and explicit rules. A minimal sketch of how this suggest-versus-step-back logic could be encoded follows below.
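The sketch assumes a deliberately simple two-dimensional rule based on stakes and bucketed model confidence; the names, buckets, and thresholds are illustrative, not a prescribed standard.

```typescript
// Illustrative decision logic: when to nudge, when to slow down, when to ask for more input.
type Stakes = "low" | "high";                 // e.g. formatting text vs. health, finance, law, HR
type Confidence = "low" | "medium" | "high";  // calibrated model confidence, bucketed for the UI

type SuggestionMode =
  | "active_suggestions"    // System 1 support: ready-made, easily dismissable prompts
  | "reflective_prompts"    // System 2 friction: offer alternative perspectives, no defaults
  | "ask_for_more_context"; // low confidence: request information instead of guessing

function chooseSuggestionMode(stakes: Stakes, confidence: Confidence): SuggestionMode {
  if (confidence === "low") {
    // Communicate uncertainty instead of guessing confidently.
    return "ask_for_more_context";
  }
  if (stakes === "high") {
    // Health, finance, law, HR: introduce cognitive friction and present context.
    return "reflective_prompts";
  }
  // Low stakes and reasonable confidence: reduce noise with dismissable suggestions.
  return "active_suggestions";
}

// A routine formatting task keeps fast suggestions; a medical question switches to reflection.
console.log(chooseSuggestionMode("low", "high"));  // "active_suggestions"
console.log(chooseSuggestionMode("high", "high")); // "reflective_prompts"
```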

Checklist for Product Teams

Responsible AI is not only about whether the model in the backend works correctly. It is about how we design the point of contact between humans and machines. As product managers, UX designers, and AI engineers, you must remember that you shape behavior.

Here are practical principles for designing suggested prompts and suggestions; a short sketch after the list shows one way several of them could be encoded.

1. Ensure explainability (XAI): The user must understand why the AI suggests a particular prompt. Instead of a black box, use explainability nudges such as “I suggest this question because it often helps clarify return policies.”

2. Design for rejection: Suggestions must be as easy to ignore as they are to click. If the interface forces selection from a predefined list, autonomy is limited. Always leave an open gateway, such as a blank chat bar that visually dominates the suggestion list.

3. Beware of the curse of anthropomorphization: Do not use language in prompts suggesting that AI has emotions or its own agenda. A competent, neutral assistant tone generates less resistance and prevents dangerous emotional attachment, unlike the tone of a virtual friend.

4. Manage algorithm confidence: Design interfaces that visualize the model’s confidence level with labels such as low, medium, or high confidence instead of incomprehensible percentages. If the model is uncertain, suggest prompts that ask the user for more information rather than offering ready solutions.

5. Test mental models, not only click flows: Study whether, after interacting with your suggestions, users feel they made their own decision or whether they feel led by the hand. Measure disagreement accuracy, meaning whether the user can disagree with the AI when it is wrong.
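As one possible way to operationalize points 1, 2, and 4, here is a sketch of a contract that a single suggestion could satisfy before it is rendered; the field names are hypothetical and the fallback wording is only an example.

```typescript
// Hypothetical contract for a rendered suggestion, encoding points 1, 2, and 4 of the checklist.
type ConfidenceLabel = "low" | "medium" | "high";

interface ResponsibleSuggestion {
  prompt: string;              // suggested prompt text, in a neutral, non-anthropomorphic tone
  explanation: string;         // explainability nudge: why this prompt is being suggested
  dismissable: true;           // must always be ignorable; free text stays primary
  confidence: ConfidenceLabel; // shown as a label, not a raw percentage
}

// Render-time guard: a suggestion without an explanation, or with low confidence,
// falls back to asking the user for more information instead of a ready answer.
function renderOrFallback(s: ResponsibleSuggestion): string {
  if (!s.explanation || s.confidence === "low") {
    return "Would you like to give me more details so I can suggest better options?";
  }
  return `${s.prompt} (suggested because: ${s.explanation})`;
}
```

The point is not this exact schema but that explainability, dismissability, and calibrated confidence become properties every suggestion must carry, rather than afterthoughts.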

Designing AI today means designing the architecture of choice. The goal is not to create a system that replaces human thinking. The real challenge, and the measure of good design, is to create AI that makes thinking easier and more structured while ultimately leaving the steering wheel in human hands.

You will find more about Responsible AI Patterns here soon: behaviorai.eu