Overview
The utility of large language models is rapidly shifting from mere content generation to sophisticated intellectual labor. OpenAI’s latest guidance on brainstorming demonstrates that the AI is evolving into a structured thought partner, capable of organizing complex ideas and transforming vague directional goals into actionable plans. This capability moves the technology beyond being a simple idea generator; it becomes a mechanism for refining the thinking process itself.
Traditional brainstorming sessions often suffer from two core failures: either a scarcity of initial ideas or an overwhelming surplus of concepts that lack structural coherence. The current iteration of AI models addresses these bottlenecks by providing immediate expansion of option sets, imposing organizational frameworks, and facilitating early-stage pressure testing. It is not designed to replace human context or expert judgment, but rather to drastically accelerate and standardize the initial, messy stages of strategic development.
The key takeaway is that the prompt itself must be highly engineered. Simply asking ChatGPT for "ideas" yields generic results. Instead, the process requires defining the core decision, establishing strict operational constraints, and following a deliberate, multi-stage flow to maximize the quality of the output.
Engineering the Decision: Constraints and Context
The most significant leap in using AI for strategic planning is the mandate to start with a clear decision, rather than a broad topic. When prompting, users must define the precise choice at hand—for instance, specifying whether the goal is selecting a campaign concept for the next six weeks or prioritizing a specific set of onboarding improvements. This specificity immediately makes the output purposeful and usable.
Furthermore, the model requires constraints to prevent the generation of purely theoretical, unfeasible concepts. Adding constraints is critical for grounding the output in reality. This includes defining the audience profile, the available timeline, the team's current capacity, and the specific channels that must be utilized. Even a concise limitation, such as "This must be executable by a team of three within four weeks," dramatically improves the feasibility and practical value of the generated ideas.
To build upon this, incorporating prior context is highly effective. By detailing what approaches have already been attempted, what worked, and what failed, the user prevents the AI from repeating known failures. This contextual layer allows the model to build upon existing organizational knowledge, ensuring the generated ideas are not just novel, but relevant to the specific operational history of the team.
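To make this concrete, the decision, constraints, and prior context can be assembled into one structured prompt before anything is sent to a model. This is a minimal sketch; the function name, field labels, and example values are hypothetical, and the resulting string could be passed to any chat-style client.

```python
def build_brainstorm_prompt(decision, constraints, prior_context):
    """Assemble a structured brainstorming prompt from a precise decision,
    operational constraints, and a record of prior attempts."""
    lines = [f"Decision to make: {decision}", "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "What we have already tried:"]
    lines += [f"- {p}" for p in prior_context]
    lines += ["", "Generate a broad set of distinct approaches that fit these constraints."]
    return "\n".join(lines)

# Hypothetical example values for illustration only.
prompt = build_brainstorm_prompt(
    decision="Select one campaign concept for the next six weeks",
    constraints=[
        "Executable by a team of three within four weeks",
        "Audience: existing trial users",
        "Channels: email and in-app only",
    ],
    prior_context=[
        "Generic discount emails underperformed",
        "In-app checklists improved activation",
    ],
)
```

Keeping the decision, constraints, and history in separate labeled sections makes it easy to audit what the model was actually told.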
The Wide-to-Narrow Methodology
Effective strategic thinking requires a deliberate methodology, and the AI is best utilized by adopting a "wide → narrow" flow. This pattern intentionally separates the idea generation phase from the evaluation phase, preventing premature judgment from stifling creativity.
The process begins by asking the model to generate a broad array of possible approaches, given the established constraints. At this stage, the goal is sheer volume and diversity of options. Once the initial set of ideas is generated, the process shifts to narrowing the focus. The user must then prompt the AI to group these disparate ideas into distinct, comparable themes. This step forces structure, allowing the comparison of options based on defined criteria.
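The two-stage flow above can be expressed as a sequence of user messages in a chat-style exchange. The wording below is illustrative, not prescribed; the point is that generation and grouping are issued as separate turns.

```python
# Illustrative two-turn message sequence for the wide -> narrow flow.
# The exact wording and counts are assumptions, not a fixed recipe.
wide_narrow_messages = [
    {"role": "user",
     "content": "Given the constraints above, generate 15 distinct approaches."},
    # ...the model replies here with the wide option set...
    {"role": "user",
     "content": "Group those ideas into 3-5 comparable themes and name each theme."},
]
```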
The evaluation phase can be formalized by asking the model to perform specific analytical tasks. Instead of asking "Which is best?", the user can request a comparison of tradeoffs, asking the AI to identify what each option requires in terms of effort versus potential impact. Advanced techniques include asking the model to score each idea on a scale (e.g., 1–5) for impact and effort, or visualizing the options on a 2x2 matrix. These structured prompts force the AI to move from creative suggestion to analytical comparison.
Advanced Prompting for Critical Thinking
The most sophisticated use of the AI involves forcing it to adopt the role of a critical evaluator, not just a generator. This elevates the tool from a brainstorming assistant to a pseudo-consultant.
Instead of accepting the first recommendation, the user should prompt the model to explain its reasoning for any suggested option. This forces the AI to articulate its assumptions and the logical path it took, which is crucial for human review. Similarly, requesting a "friendly critique"—asking what one thing could make the plan stronger—forces the model to surface potential weaknesses or blind spots early in the process.
Furthermore, the model can be prompted to force a choice, even if multiple options were presented. Asking, "If we can only execute one of these, which should we pick and why?" simulates the high-stakes decision environment, providing a clear rationale and a single, actionable focus. By changing the format of the output—requesting a decision tree, a timeline, or a stakeholder map—the user forces the AI to organize the raw ideas into a specific, usable framework, moving the output from abstract concepts to concrete project plans.
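Forcing a single pick can likewise be made deterministic once numeric scores exist, for instance by ranking on impact minus effort. The ranking rule and tie-break below are assumptions, not a method the guidance prescribes.

```python
def force_choice(scores):
    """Pick exactly one idea from {name: (impact, effort)}:
    highest impact-minus-effort, ties broken by higher impact.
    This ranking rule is an illustrative assumption."""
    return max(scores, key=lambda k: (scores[k][0] - scores[k][1], scores[k][0]))

# Hypothetical scores: "referral push" wins (5 - 2 = 3 vs 5 - 5 = 0).
picked = force_choice({"referral push": (5, 2), "full redesign": (5, 5)})
```

The model's "which should we pick and why?" answer can then be checked against this mechanical ranking to see whether its rationale and the scores actually agree.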