Creating research threads

Use the new research dialog to write strong queries, pick providers and budget, and choose a mode.

Every research run in Parallect starts as a thread: your question, the providers you involve, how much you're willing to spend, and how deeply the system should work. The new research dialog brings those choices together in one place so you can start confident runs without guessing what happens next.

The research dialog at a glance

When you open the new research dialog, you'll typically see:

  • Your query -- What you want answered or investigated.
  • Providers -- Which AI systems participate (each has different strengths).
  • Budget tier -- A cap for how much a single job can cost, which also shapes how many providers can run.
  • Research mode -- Fast for quicker results, or Methodical for more thorough, in-depth research.

You'll also get a cost preview tied to your tier and selections so you know the guardrails before you commit.

Writing a good research query

Strong queries share a few habits: they're specific, scoped, and honest about the output you need.

Do:

  • Name the audience or use -- e.g. "for a product manager deciding whether to integrate X."
  • Add constraints -- time period, geography, regulation, or "peer-reviewed only."
  • Ask for structure when it helps -- "compare A vs B in a table" or "list risks then mitigations."
  • Say what not to do -- "no marketing fluff" or "exclude vendor blogs."

Avoid:

  • One-word prompts ("blockchain") unless you want a very broad survey.
  • Hidden assumptions -- spell out what "best" or "successful" means for you.

Examples:

  • Weak: "AI in healthcare." Stronger: "Summarize FDA guidance on clinical decision support software (2023-2025), with citations; focus on liability and monitoring obligations."

  • Weak: "Should we use Kubernetes?" Stronger: "For a 5-person team running 3 stateless APIs on a single cloud region, compare Kubernetes vs managed container services on ops burden and cost; assume no dedicated platform team."
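The habits above can be sketched as a simple query template. This is purely illustrative: the field names (`audience`, `constraints`, and so on) are not part of Parallect, just a convenient way to make sure each habit shows up in the final query text.

```python
# Illustrative sketch only: a helper for assembling a research query that
# covers audience, constraints, desired structure, and exclusions.
# None of these names come from Parallect's product or API.

def build_query(question, audience=None, constraints=(), structure=None, exclude=()):
    """Combine a core question with the query-writing habits above."""
    parts = [question]
    if audience:
        parts.append(f"Audience: {audience}.")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints) + ".")
    if structure:
        parts.append(f"Format: {structure}.")
    if exclude:
        parts.append("Exclude: " + "; ".join(exclude) + ".")
    return " ".join(parts)

query = build_query(
    "Compare Kubernetes vs managed container services on ops burden and cost.",
    audience="a 5-person team running 3 stateless APIs in one cloud region",
    constraints=["assume no dedicated platform team"],
    structure="comparison table, then a recommendation",
    exclude=["vendor blogs"],
)
```

Even if you never script your queries, running through those four fields mentally before you type is a quick way to catch hidden assumptions.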

Choosing providers

Each provider has a distinct "personality" -- speed, citation style, depth, and data sources differ. A practical approach:

  • Start from what the decision needs -- quick scan vs. exhaustive review vs. social/real-time context.
  • Use more than one provider when stakes are high or you want disagreement surfaced before synthesis.
  • Reserve premium or reasoning-heavy options for questions where depth outweighs latency and cost.

You'll find a fuller tour in Choosing providers. For budget-driven defaults, your tier still sets how many providers can participate -- see Budget tiers.

Budget tier and cost preview

Tiers set a maximum spend for the job and influence how many providers take part by default:

  Tier | Cap (approx.) | Default providers
  -----|---------------|------------------
  XXS  | $1            | 1
  XS   | $2            | 1
  S    | $5            | 1
  M    | $15           | 2
  L    | $30           | 3
  XL   | $60           | All

The dialog's cost preview reflects your tier and setup. You're charged actual usage, not automatically the full cap -- the tier is a ceiling. You can also override the default provider selection to include more or fewer providers within your tier's budget. See Budget tiers for how to pick the right one.
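The ceiling behavior can be sketched in a few lines. This is a hypothetical model of the table above, not Parallect's actual billing code; the values simply mirror the published tiers.

```python
# Hypothetical sketch of the tier table above. Parallect's real internals
# are not public; this just models "the cap is a ceiling, not a charge".
TIERS = {
    "XXS": {"cap_usd": 1,  "default_providers": 1},
    "XS":  {"cap_usd": 2,  "default_providers": 1},
    "S":   {"cap_usd": 5,  "default_providers": 1},
    "M":   {"cap_usd": 15, "default_providers": 2},
    "L":   {"cap_usd": 30, "default_providers": 3},
    "XL":  {"cap_usd": 60, "default_providers": None},  # None = all providers
}

def amount_charged(tier, actual_usage_usd):
    """You pay actual usage, never more than the tier's cap."""
    return min(actual_usage_usd, TIERS[tier]["cap_usd"])
```

For example, a Medium-tier job that only uses $4 of provider time costs $4, not $15.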

Choosing a research mode

  • Fast -- All providers work at the same time. Results arrive in seconds to a few minutes. Best for quick answers and iteration.
  • Methodical -- Providers work one at a time, each building on what came before. This produces more thorough, interconnected results. Expect roughly two to ten minutes depending on scope and providers.

Mode affects both time and cost: Methodical runs take longer and typically use more budget, but can produce deeper analysis. Details: Research modes.
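The difference between the two modes boils down to parallel versus sequential orchestration. Below is a minimal sketch of that distinction with stubbed provider calls; it is not Parallect's implementation, and the `ask` function stands in for whatever a real provider request looks like.

```python
# Illustrative only: "fast" runs providers concurrently with no shared context;
# "methodical" runs them one at a time, each seeing earlier results.
import asyncio

async def ask(provider, query, context=""):
    await asyncio.sleep(0)  # stand-in for a real provider call
    return f"{provider}: answer to {query!r} given {len(context)} chars of context"

async def fast(providers, query):
    # All providers work at the same time.
    return await asyncio.gather(*(ask(p, query) for p in providers))

async def methodical(providers, query):
    # Providers work sequentially, each building on what came before.
    results, context = [], ""
    for p in providers:
        result = await ask(p, query, context)
        results.append(result)
        context += result
    return results

fast_results = asyncio.run(fast(["A", "B", "C"], "example question"))
seq_results = asyncio.run(methodical(["A", "B", "C"], "example question"))
```

The trade-off follows directly from the structure: the parallel version finishes in roughly the time of the slowest single call, while the sequential version's total time (and token usage) grows with every provider in the chain.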
