Colecia: when the AI agent team builds itself the moment you ask

Existing multi-agent frameworks keep a fixed team. Colecia generates experts on the fly — and lets tensions surface.

Imagine asking a large language model:

“What are the risks and opportunities of entering the German market for a French fintech?”

The answer is fluent, well-structured, but too confident. A single point of view, no tension between the economic, regulatory, and cultural dimensions. In reality, expertise emerges from debate, not unanimity.

Current multi-agent frameworks (CrewAI, LangGraph) tried to fix this by adding more brains. But their mistake is keeping a fixed team (planner, executor, critic). The result: agents step on each other's toes, repeat the same angles, and blow up the token bill.

Colecia takes a different approach: build the team at the moment the question is asked, like convening experts for an emergency meeting.

1. Generating experts on the fly

Before answering, a meta-agent analyzes your query to determine its complexity and the domains involved. It then decides how many agents are needed (typically between 2 and 8) and generates their profiles on the spot.
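Colecia's actual planning prompt isn't published; the following is a minimal sketch of what such a meta-agent call could look like. `META_PROMPT` and `plan_team` are assumed names, and the prompt wording is illustrative:

```python
# Illustrative sketch of the meta-agent's planning step. The prompt wording
# and function names are assumptions, not Colecia's published API.
META_PROMPT = """Analyze the user's question. Return JSON with:
- "complexity": low | medium | high
- "domains": list of expert domains involved
- "n_agents": integer between 2 and 8
- "profiles": one identity card per agent (name, role, expertise,
  analysis_boundaries with COVERS / DOES NOT COVER)
Question: {question}"""

def plan_team(question: str, llm) -> dict:
    """One LLM call that sizes and staffs the team before drafting any answer.
    `llm` is any callable that takes a prompt and returns parsed JSON."""
    return llm(META_PROMPT.format(question=question))
```

The key design choice is that team composition is itself an LLM output, so the number and nature of experts vary with each question instead of being hard-coded.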

Each agent receives an identity card (JSON) that strictly defines its scope. The most important field is analysis_boundaries:

```json
{
  "name": "Fintech Regulatory Analyst",
  "role": "Expert in compliance and financial services law",
  "expertise": ["PSD2", "Fintech GDPR", "BaFin", "German banking law"],
  "analysis_boundaries": "COVERS: licenses, compliance, BaFin requirements | DOES NOT COVER: commercial strategy, market positioning"
}
```

This explicit instruction prevents redundancy. The regulatory agent won't touch commercial strategy, and vice versa.
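Colecia's implementation isn't shown, but the `analysis_boundaries` field suggests a straightforward redundancy check between profiles. A minimal Python sketch under that assumption (`AgentProfile`, `covers`, and `overlapping_topics` are hypothetical names):

```python
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    """Identity card for a dynamically generated expert. Field names follow
    the JSON example above; the class itself is an illustrative sketch."""
    name: str
    role: str
    expertise: list[str] = field(default_factory=list)
    analysis_boundaries: str = ""

    def covers(self) -> set[str]:
        """Parse the COVERS part of analysis_boundaries into topic strings."""
        covered, _, _ = self.analysis_boundaries.partition("| DOES NOT COVER:")
        covered = covered.replace("COVERS:", "")
        return {t.strip().lower() for t in covered.split(",") if t.strip()}

def overlapping_topics(a: AgentProfile, b: AgentProfile) -> set[str]:
    """Flag redundancy: topics claimed by both profiles."""
    return a.covers() & b.covers()

regulatory = AgentProfile(
    name="Fintech Regulatory Analyst",
    role="Expert in compliance and financial services law",
    expertise=["PSD2", "Fintech GDPR", "BaFin", "German banking law"],
    analysis_boundaries="COVERS: licenses, compliance, BaFin requirements "
                        "| DOES NOT COVER: commercial strategy, market positioning",
)
strategist = AgentProfile(
    name="Market Entry Strategist",
    role="Expert in go-to-market strategy",
    analysis_boundaries="COVERS: commercial strategy, market positioning "
                        "| DOES NOT COVER: licenses, compliance",
)
print(overlapping_topics(regulatory, strategist))  # set() -> no redundancy
```

Because each profile declares both what it covers and what it does not, overlap becomes a checkable property rather than a hope.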

2. Coordination through stigmergy (the intelligence of ants)

In an ant colony, insects don't talk to each other directly — they leave pheromones for others to read. Colecia applies this principle through a shared environment.

Before responding, each agent consults a shared “WHO COVERS WHAT” board that lists the scope each teammate has claimed.

No direct messages are exchanged between agents. They adjust their behavior based on this board. This ensures complementary coverage without costly sequential exchanges.
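In code, such a board can be as simple as a shared structure that agents read before writing. A sketch under that assumption (`board` and `claim_scope` are illustrative names, not Colecia's API):

```python
# Illustrative sketch of stigmergic coordination: agents never message each
# other; they only read and write a shared "WHO COVERS WHAT" board.
board: dict[str, list[str]] = {}  # agent name -> topics it has claimed

def claim_scope(agent: str, topics: list[str]) -> list[str]:
    """Read the board first, then claim only the topics nobody covers yet.
    Like a pheromone trail, the claim is written once and read by everyone."""
    taken = {t for claimed in board.values() for t in claimed}
    free = [t for t in topics if t not in taken]
    board[agent] = free
    return free

claim_scope("Regulatory Analyst", ["BaFin licensing", "GDPR compliance"])
remaining = claim_scope("Strategist", ["GDPR compliance", "market positioning"])
print(remaining)  # ['market positioning'] -> overlap avoided without messaging
```

The cost advantage follows directly: reading one shared board is a constant overhead per agent, whereas pairwise message exchanges grow quadratically with team size.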

3. Lightweight metacognition: observe without judging

After this first round, a small observer agent analyzes the responses produced. Instead of assigning scores, it writes textual observations.
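The article doesn't reproduce these observations, but in spirit they are short notes that flag tensions and gaps. A minimal sketch, assuming a simple keyword-based observer (`observe` and its heuristics are illustrative, not Colecia's method):

```python
# Illustrative observer pass: emit plain-text notes, never numeric scores.
responses = {
    "Economist": "The German market is under-penetrated; strong opportunity.",
    "Strategist": "Post-2023 timing is difficult; competition has increased.",
}

def observe(responses: dict[str, str]) -> list[str]:
    """Write textual observations that flag tensions and missing angles."""
    notes = []
    positive = {a for a, r in responses.items() if "opportunity" in r.lower()}
    negative = {a for a, r in responses.items()
                if "difficult" in r.lower() or "risk" in r.lower()}
    if positive and negative:
        notes.append(f"Tension: {', '.join(sorted(positive))} see opportunity; "
                     f"{', '.join(sorted(negative))} see obstacles.")
    if not any("hiring" in r.lower() for r in responses.values()):
        notes.append("Gap: no agent addressed local hiring.")
    return notes

for note in observe(responses):
    print("-", note)
```

A real observer would itself be an LLM call, but the output contract is the same: free-text notes, not rankings, so no agent is declared "right" before the synthesis.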

These signals are passed to the final synthesis to highlight real disagreements.

4. The result: an answer that was genuinely debated

Let’s compare what different approaches produce on our original question:

| Approach | Structure | Added value |
| --- | --- | --- |
| LLM alone | Smooth, single voice | Little insight; “anything is possible”. |
| Fixed swarm | Redundant, similar angles | Lots of noise, little signal. |
| Colecia | Multidimensional, explicit tensions | Detects opportunities vs. risks, identifies missing angles. |
Colecia’s synthesis on our question reads, for example:

“The economist identifies an under-penetrated market (opportunity), but the strategist highlights difficult post-2023 timing (increased competition). The regulatory analyst warns about BaFin and GDPR compliance costs. No agent addressed local hiring, an unevaluated operational risk. Finally, the term ‘French fintech’ is interpreted two ways, limited resources vs. niche expertise, creating two readings of the question.”

This isn’t a longer answer. It’s a more honest one.

Why this changes the game

For decision-makers, an answer that exposes tensions and overlooked angles is far more actionable than a confident monologue: it shows where the trade-offs lie and which questions still need investigation.

We move from a simple AI-generated opinion to a genuine map of disagreements.

Try Colecia yourself

We're looking for early adopters in fintech, healthcare, and industrial R&D.