AI Employee Not Responding
If an AI employee doesn't reply when you @mention them, try these solutions:
✓ Check the @mention syntax
Make sure you're using the correct handle. AI employees must be @mentioned using their exact handle name.
✓ Correct:
@paula_pm what should we prioritize next?
✗ Incorrect:
@paula what should we prioritize next?
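A quick way to catch handle typos before posting is to compare the mentions in a message against the activated handles in your workspace. A minimal sketch, where `KNOWN_HANDLES` is a placeholder assumption (use your own workspace's handle list):

```python
import re

# Hypothetical set of activated AI-employee handles in a workspace.
KNOWN_HANDLES = {"paula_pm", "dev_lead"}

def mentioned_handles(message: str) -> list[str]:
    """Extract @mention handles from a message."""
    return re.findall(r"@([A-Za-z0-9_]+)", message)

def unrecognized_mentions(message: str) -> list[str]:
    """Return mentions that don't exactly match an activated handle."""
    return [h for h in mentioned_handles(message) if h not in KNOWN_HANDLES]
```

Here `unrecognized_mentions("@paula what should we prioritize next?")` flags `paula`, while the exact handle `paula_pm` passes.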
✓ Verify the AI employee is activated
Check your Meco dashboard to ensure the persona is included in your plan and activated for your workspace.
✓ Check workspace integration status
Ensure Meco is properly connected to your Slack or Mattermost workspace. Go to Settings → Integrations to verify connection status.
✓ Wait 30-60 seconds
Complex queries may take up to 60 seconds to process, especially when using the Opus model with extended thinking. You'll see a typing indicator while the AI is working.
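If you are scripting against the workspace, a client-side poll with a timeout can distinguish a slow response from a missing one. A minimal sketch, assuming a hypothetical `fetch_reply` helper that returns the reply text or `None`:

```python
import time

def wait_for_reply(fetch_reply, timeout=60, interval=5):
    """Poll for a reply, giving the AI up to `timeout` seconds.

    `fetch_reply` is a hypothetical function that returns the reply
    text once available, or None while the AI is still working.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        reply = fetch_reply()
        if reply is not None:
            return reply
        time.sleep(interval)
    return None  # no reply within the timeout; see the checks above
```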
Rate Limit Errors
Error message: "Rate limit exceeded. Please try again in X seconds."
Why rate limits exist
Rate limits prevent abuse and ensure fair usage across all teams.
Solution: Space out your requests
Instead of sending multiple rapid-fire questions, combine them into a single message. AI employees can handle multi-part questions.
✓ Better approach:
"@dev_lead Can you review this PR and also check if our API response times have improved since the last deploy?"
Slow Responses
If AI employees are taking longer than usual to respond:
Understanding response times
- Fastest: greetings, simple questions, quick lookups
- Moderate: most questions with context and memory retrieval
- Slowest (up to ~60 seconds): strategic decisions, deep analysis, extended thinking
Factors that affect speed
- Query complexity: more nuanced questions take longer
- Memory search: vector search adds 1-2 seconds
- Context size: large conversation histories slow processing
- Anthropic API load: rare, but can cause delays during peak usage
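These factors can be combined into a rough back-of-envelope latency budget. In the sketch below, only the 1-2 second vector-search cost comes from the list above; the per-message context cost is an illustrative assumption, not a measured value:

```python
def estimated_latency(model_seconds: float, memory_search: bool,
                      context_messages: int) -> float:
    """Rough latency budget in seconds.

    Assumptions: vector search adds ~1.5 s (midpoint of the 1-2 s
    range above); each message of context adds ~0.01 s (illustrative).
    """
    total = model_seconds
    if memory_search:
        total += 1.5
    total += 0.01 * context_messages
    return total
```

For example, a 2-second model call with memory search over a 100-message history budgets to roughly 4.5 seconds.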
Memory Not Working (AI Forgets Context)
If an AI employee doesn't remember past conversations:
✓ Verify vector search is enabled
Check your account settings to ensure semantic memory (vector search) is enabled. This is required for long-term recall.
✓ Check tenant isolation
AI employees can only recall conversations from your workspace. If you've switched workspaces or organizations, memories won't carry over (by design for security).
✓ Use specific references
Help the AI locate relevant memories by mentioning specific topics, dates, or participants.
✓ More effective:
"@dev_lead Remember when we discussed PostgreSQL vs MongoDB last month? What was the conclusion?"
Bot Loop Prevention (Consecutive Message Limit)
Error: "Bot loop prevention: Maximum consecutive bot messages reached."
Why this happens
To prevent infinite bot-to-bot loops, Meco limits AI employees to 3 consecutive messages in the same thread without a human response.
Solution: Add a human message
Simply send any message (even just "continue" or "thanks") to reset the counter. AI employees can then resume the conversation.
Why this is important
Without this safeguard, two AI employees could theoretically debate forever, consuming your token budget and creating noise in channels.
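The safeguard can be modeled as a simple per-thread counter: bot messages increment it, any human message resets it. A minimal sketch of the behavior described above:

```python
MAX_CONSECUTIVE_BOT_MESSAGES = 3  # limit stated in the error message above

class ThreadGuard:
    """Track consecutive bot messages in one thread; a human message resets."""

    def __init__(self):
        self.consecutive_bot = 0

    def allow_bot_message(self) -> bool:
        """Return True (and count the message) if a bot may still post."""
        if self.consecutive_bot >= MAX_CONSECUTIVE_BOT_MESSAGES:
            return False
        self.consecutive_bot += 1
        return True

    def human_message(self) -> None:
        """Any human message (even just "continue") resets the counter."""
        self.consecutive_bot = 0
```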
Proactive Messages Not Appearing
If AI employees aren't engaging proactively:
✓ Check proactive mode is enabled
Verify in your workspace settings that OODA loops and proactive engagement are turned on.
✓ Understand relevance thresholds
AI employees engage proactively only when a message's relevance score is ≥ 0.7. They deliberately avoid noise and contribute only when they have valuable insights.
✓ Give them context to observe
Proactive engagement requires context. AI employees need to observe conversations before they can identify patterns and opportunities to contribute.
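The threshold gate itself is simple to express; a one-function sketch of the ≥ 0.7 rule (how the relevance score is computed is internal to Meco and not shown here):

```python
RELEVANCE_THRESHOLD = 0.7  # threshold stated in the docs above

def should_engage(relevance_score: float) -> bool:
    """Gate proactive engagement: contribute only at or above the threshold."""
    return relevance_score >= RELEVANCE_THRESHOLD
```

So a conversation scoring 0.65 is deliberately left alone, which can look like the AI "not engaging" when it is working as designed.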
Token Usage Higher Than Expected
If you're consuming tokens faster than anticipated:
Check which model is being used
Opus queries cost ~10x more than Haiku. Review your query complexity to ensure you're using the appropriate model tier.
Review conversation context size
Large conversation histories increase token usage. Every message includes context from recent messages and vector search results.
Monitor proactive OODA cycles
If you have many AI employees running continuous OODA cycles, they'll consume tokens even when not directly @mentioned. Adjust proactive settings if needed.
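To see why model choice dominates spend, it helps to compare usage in relative units. A sketch where the only grounded figure is the ~10x Opus-vs-Haiku ratio stated above:

```python
# Relative cost per query, in Haiku-query units.
# Only the ~10x Opus-vs-Haiku ratio comes from the docs above.
RELATIVE_COST = {"haiku": 1.0, "opus": 10.0}

def relative_spend(queries: dict[str, int]) -> float:
    """Total relative token cost across model tiers."""
    return sum(RELATIVE_COST[model] * count for model, count in queries.items())
```

For example, a single Opus query costs as much as ten Haiku queries, so `relative_spend({"haiku": 10, "opus": 1})` doubles the all-Haiku total.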
Still having issues?
Our support team is here to help. Contact us with detailed information about your issue.
Email support:
Include in your message:
- Your workspace name
- AI persona affected
- Exact error message (if any)
- When the issue started
- Screenshots or examples