Promptly — Prompt Optimizer Studio
Visualization-first workflow. See every gain, every cost, every version.

How would you like to start?

Pick a workflow mode. Switch anytime.

Layer 1 · Magic Mode

Describe your goal

Promptly turns a single sentence into a multi-step optimized prompt. Tell us what you want and we'll handle the rest.

Context or Source Text (Optional): This helps Promptly understand existing material without forcing you to share it.

Pipeline Working Status

Selected via Outcome Runner (best-of-N evaluation):

1. Spec Builder · Structure (Goal, Constraints, Tone)
2. Question Engine · Clarify (Q1–Q3)
3. LLM Agents · Draft (Architect, Editor, Judge)
4. Metrics & Scoring · Score (Clarity, Style, Safety, Cost)
5. Outcome Runner · Select (Candidates 1–3 → Top Pick)
Promptly’s pipeline will show you how your best prompt is chosen.

If the output looks the same as the input, the backend will automatically try a more aggressive rewrite.
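
A minimal sketch of that fallback, assuming a hypothetical `rewrite(prompt, aggressiveness)` callable; the function name and its parameter are illustrative stand-ins, not Promptly's actual backend API.

```python
# Sketch of the same-output fallback. `rewrite` and its
# `aggressiveness` parameter are hypothetical stand-ins,
# not Promptly's real backend API.
def optimize_with_fallback(rewrite, prompt: str) -> str:
    result = rewrite(prompt, aggressiveness=0.3)  # conservative first pass
    if result.strip() == prompt.strip():
        # Output matched the input, so retry with a more aggressive rewrite.
        result = rewrite(prompt, aggressiveness=0.9)
    return result
```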

Best Prompt Output

We run Spec → Questions → LLM agents → Metrics → Outcome Runner before surfacing this prompt.

Fine-tune the structural instructions and constraints that guide the agents behind the magic.

  • Instruction blueprint (Optional): Describe how the agents should approach the prompt (tone, flow, guardrails).
  • Demonstration examples (Optional): Share one input/output pair so agents know the desired output style.
  • Constraints & style (Optional): List formatting rules, length, safety limits, or tone you care about.
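
For illustration only, these three inputs might be captured as a simple structure like the one below; the field names are assumptions, not Promptly's schema.

```python
# Illustrative container for the advanced guidance inputs above;
# field names are assumptions, not Promptly's actual schema.
advanced_guidance = {
    "instruction_blueprint": "Open with context, state constraints early, keep guardrails explicit.",
    "demonstration_examples": [
        {"input": "raw meeting notes", "output": "three bulleted action items"},
    ],
    "constraints_and_style": ["markdown output", "under 200 words", "neutral tone"],
}
```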

Only open this when you want dataset-style control over samples, schema, and optimization knobs.

  • POS / NEG dataset snippets (Optional): List a few positive/negative examples to bias scoring.
  • Schema or output template (Optional): Define the structure we should respect in the final prompt.
  • Optimization knobs (Optional): Control candidate count, evaluation tests, or sampling behavior.
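
A sketch of what those knobs could look like as a config object, with illustrative defaults; none of these names are Promptly's real settings.

```python
from dataclasses import dataclass, field

# Illustrative optimization knobs; names and defaults are assumptions,
# not Promptly's actual configuration.
@dataclass
class OptimizationKnobs:
    candidate_count: int = 3                              # best-of-N pool size
    eval_tests: list[str] = field(default_factory=list)   # named evaluation tests
    temperature: float = 0.7                              # sampling temperature for drafts
    max_iterations: int = 5                               # optimization rounds before stopping

knobs = OptimizationKnobs(candidate_count=5, eval_tests=["format_check", "tone_check"])
```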
  • Accuracy: Percentage of test cases that passed with correct outputs.
  • F1: Balance between precision and recall (higher is better).
  • Pass Rate: Ratio of successful runs to total attempts.
  • Estimated Cost: Average usage per request (estimate).
  • Progress %: Completion percentage toward target accuracy.
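
These definitions map onto standard formulas; a minimal sketch, where the helper names are ours and F1 is the usual harmonic mean of precision and recall.

```python
# Minimal metric helpers matching the definitions above.
def accuracy(passed: int, total: int) -> float:
    return passed / total if total else 0.0

def f1(precision: float, recall: float) -> float:
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def pass_rate(successes: int, attempts: int) -> float:
    return successes / attempts if attempts else 0.0

def progress(current: float, target: float) -> float:
    # Completion percentage toward the target accuracy, capped at 100%.
    return min(current / target, 1.0) * 100 if target else 0.0
```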

Charts: Growth Over Iterations · Change Contribution · Pass vs Fail (%) · Progress Meter (%)

✨ Why Promptly beats a plain AI model

We run a structured pipeline—spec building, question refining, multi-agent orchestration, scoring, and Outcome Runner selection—so you're never just sending a single prompt to an LLM.

1 · Spec Builder

Turn your raw idea into a structured spec so every requirement is explicit before we touch an LLM.
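
As a sketch, a structured spec might look like the JSON-ready object below; the exact fields are assumptions, with Goal / Constraints / Tone taken from the pipeline panel above.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative spec shape mirroring the Goal / Constraints / Tone
# panel above; not Promptly's actual schema.
@dataclass
class PromptSpec:
    goal: str
    constraints: list[str]
    tone: str

spec = PromptSpec(
    goal="Summarize customer feedback into three themes",
    constraints=["max 150 words", "no customer names"],
    tone="neutral, executive-ready",
)
print(json.dumps(asdict(spec), indent=2))  # JSON-ready blueprint
```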

2 · Question Engine

We ask targeted follow-up questions (Q1–Q3 flow) to remove ambiguity and surface context you might forget.
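
One way such a flow could work, sketched with hypothetical gap checks: ask only about fields the spec leaves ambiguous, capped at three questions.

```python
# Sketch of a Q1–Q3 flow. The gap checks below are illustrative
# assumptions, not Promptly's actual question logic.
def clarifying_questions(spec: dict) -> list[str]:
    gaps = [
        ("audience", "Who is the intended audience?"),
        ("constraints", "Any length, format, or safety constraints?"),
        ("tone", "What tone should the output take?"),
    ]
    questions = [q for key, q in gaps if not spec.get(key)]
    return [f"Q{i}: {q}" for i, q in enumerate(questions[:3], start=1)]

print(clarifying_questions({"goal": "write a launch email"}))
```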

3 · LLM Agents

Specialized agents (Architect, Editor, Judge, etc.) craft multiple candidate prompts instead of relying on a single response.
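
A sketch of role-based drafting; `call_llm` stands in for any text-generation client, and the role instructions are illustrative.

```python
# Role-based candidate drafting (sketch). `call_llm` is a stand-in
# for any chat-completion client; role instructions are assumptions.
ROLES = {
    "Architect": "Design the overall structure of the prompt.",
    "Editor": "Tighten wording and remove ambiguity.",
    "Judge": "Flag weaknesses and propose a corrected version.",
}

def draft_candidates(call_llm, spec_text: str) -> list[str]:
    # Each specialized agent contributes its own candidate prompt.
    return [
        call_llm(f"You are the {role}. {instruction}\n\nSpec:\n{spec_text}")
        for role, instruction in ROLES.items()
    ]
```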

4 · Metrics & Scoring

Every candidate is scored on clarity, coherence, style match, safety, cost, and risk to quantify quality.
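
Scoring could be as simple as a weighted sum over those dimensions; the weights below are illustrative, not Promptly's actual payoff function.

```python
# Weighted multi-metric score (sketch); weights are assumptions.
WEIGHTS = {"clarity": 0.25, "coherence": 0.20, "style": 0.20,
           "safety": 0.20, "cost": -0.10, "risk": -0.05}

def candidate_score(metrics: dict[str, float]) -> float:
    # Quality metrics add to the score; cost and risk subtract.
    return sum(w * metrics.get(name, 0.0) for name, w in WEIGHTS.items())

print(candidate_score({"clarity": 0.9, "coherence": 0.8, "style": 0.7,
                       "safety": 1.0, "cost": 0.3, "risk": 0.1}))
```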

5 · Outcome Runner

The Outcome-First runner applies your success criteria, compares best-of-N candidates, and returns the most reliable result.
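
Putting it together, best-of-N selection reduces to ranking candidates by your success tests first and metric score second; a sketch under those assumptions, where `run_tests` and `score_fn` stand in for the earlier stages.

```python
# Outcome-First selection (sketch): rank by tests passed, then by
# metric score. `run_tests` and `score_fn` are stand-ins for the
# earlier stages, not Promptly's actual interfaces.
def select_best(candidates, run_tests, score_fn):
    def payoff(candidate):
        passed = sum(1 for ok in run_tests(candidate) if ok)
        return (passed, score_fn(candidate))  # tests dominate, score breaks ties
    return max(candidates, key=payoff)
```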

🔧 Algorithm Specs

  • Structured spec: A JSON-ready blueprint guides every subsequent stage.
  • Clarifying questions: Removes guesswork before generating prompts.
  • Multi-agent orchestration: Different agents collaborate on architecture, editing, and judgment.
  • Best-of-N selection: We compare candidates rather than returning the first hit.
  • Metric panel: Coherence, style, pass rate, cost efficiency, and risk scores feed into the payoff.
  • Outcome Runner: Picks the winner according to your defined tests/criteria.

🚀 Every enhancement passes through the Spec → Question → Agent → Outcome pipeline, so the result you copy is the top-ranked candidate after multi-metric scoring.