Troubleshooting

If something doesn’t work as expected, here’s how to fix it quickly.


⚠️ My prompt used the wrong model

Possible Causes:

  • Prompt classification detected a different intent

  • Manual override wasn’t recognized

Fixes:

  • Specify clearly: “Use GPT-4o for this.”

  • OR explicitly request the default: “Use the default model for this.”
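
For example, a full prompt with an explicit override (illustrative wording — adapt to your task):

“Use GPT-4o for this. Summarize my meeting notes into three bullet points.”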


⚠️ Research output has no citations

Perplexity may ignore formatting and source-citation instructions, especially under load or on long, complex requests.

Fixes:

  • Add: “Be sure to include sources.”

  • OR use "research-only" tasks

  • Check whether a manual model override routed the task to GPT instead of Perplexity
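
For example, a research prompt that requests citations up front (illustrative wording):

“Research recent developments in solar panel efficiency. Be sure to include sources with URLs for each claim.”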

(Note: I am aware of quirks in Perplexity’s API that frequently prevent the output from clearly listing sources/URLs. I will continue to attempt workarounds and monitor for updates.)


⚠️ Audio speaks the system prompt / unwanted text

This usually means the audio step received the full composed prompt instead of only the text meant to be spoken.

Fix: Quote the exact text to speak:

“Read aloud only: ‘This is the text to speak.’”

Or rely on dependent text_generation → audio steps, so the audio step reads only the generated text.
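
For example, a dependent two-step prompt (illustrative wording):

“First write a two-sentence welcome message, then read that message aloud.”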


⚠️ Image didn’t generate

Common causes:

  • DALL·E content restriction

  • Prompt was too abstract, or described disallowed content

Fix: Rephrase the request with concrete, allowed content.
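
For example, instead of an abstract request like “the feeling of entropy,” try concrete imagery (illustrative wording):

“Generate an image of a tidy desk gradually becoming cluttered with papers and coffee cups.”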


⚠️ Slow output

Try shorter prompts or prompts involving fewer steps. Some types of tasks (e.g. audio generation) naturally take longer than others. While most workflows (over 90%) complete within seconds, a few more complicated requests can take up to 5 minutes.


⚠️ Errors in multi-step workflows

Step dependencies may be misinterpreted, so steps can run in the wrong order.

Fix: Add explicit sequencing terms:

  • “First… then…”

  • “After generating the text…”
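
For example, an explicitly sequenced workflow (illustrative wording):

“First generate a short product description, then create an image based on that description.”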


🛠 When to Contact Support

If something feels broken or you have suggestions or feedback, please email: 📩 [email protected]

