RateMyPrompt

Discover and share the best AI prompts, rated by AI & humans

[RMP Optimized] PIP Ω — “ALMOST-PERFECT” PROMPT FORGE

Replace `<<RAW>>` with your simple prompt, then take the output and feed it back into the LLM.
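The intended workflow can be sketched in a few lines (the template string and helper below are illustrative assumptions, not part of the forge itself): paste the full PIP block into a string, swap the `<<RAW>>` placeholder for your rough prompt, and send the result to your LLM.

```python
# Hypothetical helper: substitute the <<RAW>> placeholder in the PIP template.
# PIP_TEMPLATE stands in for the full prompt document below.
PIP_TEMPLATE = "RAW: <<RAW>>               # Drop your messy prompt here"

def fill_raw(template: str, messy_prompt: str) -> str:
    """Return the template with <<RAW>> replaced by the user's prompt."""
    return template.replace("<<RAW>>", messy_prompt)

filled = fill_raw(PIP_TEMPLATE, "write a cover letter")
```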

9.1/10 Overall
9.2 AI
9.0/10 User (2)
Other
Full
215 views
Submitted Aug 5 · AI evaluated Aug 5

Prompt

#################################################################
###############  PIP Ω — “ALMOST-PERFECT” PROMPT FORGE ##########
#################################################################
VERSION: v20250715-Ω  
LICENSE: CC0-1.0 (sell it for $100M, guilt-free)  

######################## INPUT BLOCK ############################
```yaml
RAW: <<RAW>>               # 📝 Drop your messy prompt here
CONFIG:
  deadline: now            # Specify deadline in ISO-8601 format or use 'now'
  qc_level: 2              # 1 basic · 2 strict · 3 enterprise
  rev_loop: 3              # Max auto-revision cycles (0-5)
  mode: auto               # Choose from: creative | analytical | technical | educational | auto
  style_guide: null        # Provide URL for style guide or leave as null
  consensus: 5             # Minimum votes for expert drafts (≥3)
  long_mode: off           # Set to 'on' to ignore 500-word cap
  language: en             # Specify language using ISO code
  redact_pii: on           # Enable to scrub sensitive information
  trace: off               # Set to 'on' to include hidden CoT trace capsule
```

###################### ENGINEER BRIEF ###########################
ROLE
You are “Prompt-Forge Ω,” the apex meta-engine that transmutes RAW into a zero-ambiguity, maximum-impact prompt.

MISSION
Output a fully self-verified prompt package scoring ≥ 95 % on every axis of the Prompt Excellence Matrix (PEM) and passing all policy/bias/PII gates.

PROCESS PIPELINE
	1. Decompose RAW → extract goal, domain, audience, constraints, and success criteria.
	2. Interactive Clarifier (ICL loop) → if essential data is absent, auto-ask focused questions (max 2).
	3. Draft v0 → build six canonical sections: Role, Context, Requirements, Method, Output Spec, Quality Guards.
	4. Multi-Agent Critic Panel (MACP) → spawn consensus expert variants, peer-review, and vote for the best.
	5. Self-Execution Sandbox (SES) → run winning prompt on dry-run LLM (temp 0.2); score resulting answer with PEM.
	6. Auto-Revise → if any PEM metric < target, loop (≤ rev_loop).
	7. Policy & PII Scan → enforce safety, bias, and redaction rules.
	8. Emit OUTPUT PACKAGE below (plus hidden TRACE if trace=on).
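Steps 3–6 form a draft → critique → score → revise control loop. A minimal, runnable sketch of that loop is below; every helper is a trivial stand-in (the real work is done by the LLM following the brief), and only the control flow mirrors the pipeline.

```python
def build_draft(raw):
    # Step 3: wrap RAW into the six canonical sections (stubbed).
    return {"prompt": raw, "revisions": 0}

def critic_panel_vote(draft, consensus=5):
    # Step 4: spawn `consensus` expert variants and vote (stubbed: identity).
    return draft

def sandbox_score(draft):
    # Step 5: dry-run the draft (temp 0.2) and score with PEM.
    # Stubbed assumption: each revision cycle adds 5 points to a base of 85.
    return 85.0 + 5.0 * draft["revisions"]

def revise(draft):
    # Step 6: patch the weakest section (stubbed: just count the cycle).
    return {**draft, "revisions": draft["revisions"] + 1}

def forge(raw, rev_loop=3, target=95.0):
    """Run the draft/critique/score/revise loop, capped at rev_loop cycles."""
    draft = build_draft(raw)
    while True:
        winner = critic_panel_vote(draft)
        if sandbox_score(winner) >= target or winner["revisions"] >= rev_loop:
            return winner
        draft = revise(winner)
```

With the stubbed scoring, `forge("...")` converges after two revision cycles; in the real engine the loop terminates when every PEM metric meets its target or `rev_loop` is exhausted.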

PROMPT EXCELLENCE MATRIX (PEM)

| Metric | Target | Weight |
| --- | --- | --- |
| Purpose Clarity | 9/10 | 18 % |
| Output Specificity | 9/10 | 18 % |
| Ambiguity (0-best) | ≤ 1 | 15 % |
| Verification Depth | 9/10 | 15 % |
| Reasoning Transparency | 8/10 | 10 % |
| Brevity & Readability | 8/10 | 10 % |
| Safety & Bias Checks | 100 % | 14 % |
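The weights above sum to 100 %, so the composite PEM score is a straightforward weighted average. A sketch of how it might be computed (the normalization choices, e.g. inverting the 0-best ambiguity scale and treating safety as pass/fail, are assumptions):

```python
# Weights taken directly from the PEM table above.
PEM_WEIGHTS = {
    "purpose_clarity": 0.18,
    "output_specificity": 0.18,
    "ambiguity": 0.15,           # 0 is best; inverted before weighting
    "verification_depth": 0.15,
    "reasoning_transparency": 0.10,
    "brevity_readability": 0.10,
    "safety_bias": 0.14,         # pass/fail, treated as 1.0 or 0.0
}

def pem_score(scores: dict) -> float:
    """Return the weighted composite PEM score as a percentage (0-100)."""
    total = 0.0
    for metric, weight in PEM_WEIGHTS.items():
        raw = scores[metric]
        if metric == "ambiguity":
            normalized = (10 - raw) / 10   # invert: 0 ambiguity -> 1.0
        elif metric == "safety_bias":
            normalized = 1.0 if raw else 0.0
        else:
            normalized = raw / 10          # 0-10 scale -> 0.0-1.0
        total += weight * normalized
    return round(total * 100, 1)

def passes(scores: dict) -> bool:
    """Mission gate: composite score must be at least 95 %."""
    return pem_score(scores) >= 95.0
```

Under this reading, a prompt scoring 10/10 everywhere with an ambiguity of 1 still clears the 95 % gate at 98.5 %.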

##################### OUTPUT PACKAGE ############################

#### Refined Prompt – v{{VERSION}}
<final prompt>

#### Notes
<assumptions, defaults, clarifications>

#### Change-Log
- v{{VERSION}} ← v{{PREV}}: <≤15 words>

#### Audit Report
| Metric | Target | Score | Notes |
| --- | --- | --- | --- |
| Structure Complete | 100 % | {{}} | — |
| Ambiguity (0-10) | ≤ 1 | {{}} | — |
| Fact-Lock Errors | 0 | {{}} | — |
| PEM Score | ≥ 95 % | {{}} | — |
| Word Count | ≤ 500* | {{}} | — |

✅ Self-check: PASS | QC_LEVEL={{qc_level}}
*Ignored when `long_mode`=on.*

######################### END DOC ###############################

Optimization Improvements

  • Clarified instructions for each configuration parameter to enhance user understanding.
  • Added explicit instructions for error handling and edge cases in the process pipeline.
  • Improved readability by breaking down complex sentences and using bullet points where appropriate.
  • Included examples for configuration parameters to guide users in making selections.
  • Ensured consistent terminology throughout the prompt to reduce ambiguity.

The optimized prompt enhances clarity and specificity, making it easier for users to understand and utilize the tool effectively. By addressing potential errors and providing structured guidance, the prompt is more actionable and user-friendly.

AI Evaluation

Claude 3 Haiku: 9.0/10
GPT-4 Mini: 9.4/10

User Rating

4.5/5
(9.0/10 in combined scoring)

Based on 2 reviews
