Choosing tools for a support team: a decision framework
Five categories of tools every support team needs, the questions to ask when evaluating each, and the traps to avoid.
Tooling decisions for support teams are sneaky. On the surface they look like straightforward software evaluations — compare features, pick one, roll out. In practice, a bad tooling choice shapes your team's habits for years and is painful to reverse. A good one compounds quietly in the background.
This post is a framework for thinking about the full support tool stack — five categories, the questions to ask each one, and the traps teams keep falling into.
Category 1: Ticketing / case management
The inbound system. Freshdesk, Zendesk, Intercom, Help Scout, Jira Service Management. Everyone has one. Usually chosen early, usually hard to change.
Questions to ask
- Does it have a solid API with webhook events? You'll want to integrate everything else through it.
- How does it handle custom fields? You'll need them for product area, severity, escalation tier.
- How does it handle multi-tenancy if you have multiple products or brands?
- Can agents use it on mobile without wanting to throw their phone?
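The API-and-webhooks question above is the one that pays off longest. A minimal sketch of what "integrate everything else through it" looks like: a dispatcher that routes incoming ticket events to the rest of the stack. The payload shape and the downstream names here are hypothetical; each vendor (Zendesk, Freshdesk, etc.) documents its own event schema.

```python
# Hypothetical webhook-event router. Event types and downstream
# handler names are illustrative, not any specific vendor's schema.

def route_event(event: dict) -> str:
    """Decide which downstream system should handle a ticketing webhook."""
    handlers = {
        "ticket.created": "kb-suggester",      # surface runbooks for new cases
        "ticket.escalated": "slack-notifier",  # ping the escalation channel
        "ticket.solved": "runbook-capture",    # offer to save as a runbook
    }
    return handlers.get(event.get("type", ""), "ignore")


route_event({"type": "ticket.escalated"})  # -> "slack-notifier"
```

If your ticketing tool can't express this kind of fan-out, every later category on this list gets harder to wire in.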
The trap
Over-optimizing for the customer-facing portal and under-optimizing for the agent experience. Your customers will use the portal for 30 seconds. Your agents live in the tool all day. Weight accordingly.
Category 2: Knowledge base / runbooks
Where your team's expertise lives. Most teams default to Confluence, Notion, or the built-in KB of their ticketing tool. All three are suboptimal for support-specific workflows.
Questions to ask
- Does it support structured runbooks (fixed fields), or just free-form pages?
- Is search semantic, or keyword-only?
- Can you surface KB articles inside your ticketing tool without copy-paste?
- Can agents save a solved case as a runbook in one click?
- Does it track which articles are being used (view counts, case links)?
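To make the "structured runbooks vs. free-form pages" distinction concrete, here is a sketch of the fixed-field shape a runbook might take, with case links doubling as the usage signal from the last question. The field names are assumptions for illustration, not any particular tool's schema.

```python
from dataclasses import dataclass, field

# Illustrative runbook schema: fixed fields instead of a free-form page,
# plus linked cases as a built-in usage metric. Field names are assumed.
@dataclass
class Runbook:
    title: str
    product_area: str
    symptoms: list[str]          # what the customer reports
    resolution_steps: list[str]  # what the agent does, in order
    linked_cases: list[str] = field(default_factory=list)

    def usage_count(self) -> int:
        """How many solved cases cite this runbook."""
        return len(self.linked_cases)
```

Fixed fields are what make mid-case search possible: you can match on symptoms and product area instead of grepping prose.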
The trap
Choosing a general-purpose docs tool because 'we already have a license.' Confluence is great for product docs and meeting notes; it's a poor fit for support runbooks that need to be found mid-case. A purpose-built tool (Momentum is ours, others exist) compounds faster.
Category 3: AI-assisted drafting
The newest category. Agent-facing AI that drafts replies grounded in your KB. Done well, it's a 2-3x speedup on routine replies. Done poorly, it's a tax on your agents, who have to clean up bad drafts.
Questions to ask
- Does it draft using your team's voice, or a generic corporate voice?
- Can you pick tone and reply-type per draft?
- Is PII redacted before it hits the model?
- Does it ground drafts in your KB and past cases, or just in the raw ticket text?
- Can agents edit the draft inline, or do they have to copy-paste from a modal?
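The PII question is worth making concrete. A minimal sketch of pre-model redaction: scrub identifiers out of the ticket text before it ever reaches the drafting model. The regex patterns below are illustrative only; production redaction should use a vetted library or NER model, not two hand-rolled regexes.

```python
import re

# Illustrative patterns only -- real PII redaction needs broader coverage
# (names, addresses, account numbers) and a vetted tool, not ad-hoc regex.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with a typed placeholder before the model call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


redact("Reach me at jo@example.com")  # -> "Reach me at [EMAIL]"
```

Whatever tool you evaluate, ask where this step runs: on your side, or after the text has already left for the vendor's model.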
The trap
Turning on AI drafting before the KB is in shape. The AI can only be as good as the corpus it retrieves from. If your runbooks are stale and your past cases aren't indexed, AI drafting will produce confident, hallucinated nonsense.
Category 4: Team communication
Slack, Teams, Discord, whatever your org uses. Support teams have specific needs here — shift handoffs, escalation channels, out-of-band customer coordination.
Questions to ask
- Can you link from a ticket to a Slack thread (and vice versa) without copy-paste?
- Are you losing institutional knowledge in Slack that never makes it into the KB?
- Do you have a dedicated escalation channel, or do escalations get lost in the general chat?
The trap
Letting Slack become your de facto knowledge base. Slack is optimized for real-time, not for discovery. Every 'hey, remember that thing you figured out last quarter?' is a signal your KB is failing.
Category 5: Observability / data
Datadog, Sentry, Grafana, your product's internal admin panel. The tools your agents use to diagnose customer issues.
Questions to ask
- Can agents run common lookups (customer ID → account state) without filing an engineering ticket?
- Are error logs searchable by customer or just by timestamp?
- Can an agent reproduce a customer's view for debugging?
- Are API rate limits, integration health, and sync status visible to support?
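The first question above, customer ID to account state, is the highest-volume lookup in most support teams, and it's worth sketching what self-serve looks like. The data source here is a stand-in dict; in reality it would be a read-only view over your admin API or database, with names assumed for illustration.

```python
# Stand-in for a read-only admin API / DB view (hypothetical fields).
ACCOUNTS = {
    "cus_123": {"plan": "pro", "status": "past_due", "last_sync": "2h ago"},
}

def account_state(customer_id: str) -> str:
    """One-line account summary an agent can pull without an eng ticket."""
    acct = ACCOUNTS.get(customer_id)
    if acct is None:
        return f"{customer_id}: not found"
    return (f"{customer_id}: plan={acct['plan']} "
            f"status={acct['status']} last_sync={acct['last_sync']}")
```

Even a read-only version of this lookup removes an engineering hop from the most common diagnostic path.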
The trap
Not giving support direct access to observability tools because 'that's engineering's stuff.' This creates a ticket tax where every diagnosis requires an engineering hop. Support that can self-serve basic diagnostics closes cases hours faster.
How to sequence adoption
If you're building a stack from scratch, adopt in this order:
- Ticketing first. Everything else integrates through it.
- KB second. Populated with structured runbooks, not prose wikis.
- Observability access third. Reduces the ticket tax on routine diagnostics.
- AI drafting fourth. Only after the KB is in shape — the AI is only as good as the corpus.
- Team comms last. Slack/Teams is probably already in place; the optimization is integrating it with everything above, not replacing it.
The meta-trap
Buying a tool to solve a process problem. If your team isn't writing runbooks, a new KB tool won't make them start. If your escalations drop context, a new ticketing system won't fix the handoff template. Tools amplify existing habits, good and bad. Fix the habit first, then pick the tool that makes the good habit easier.
This is the single most common way support teams waste money on tooling. Avoid it by naming the specific behavior change you want, confirming the tool supports that change, and piloting with a small team for 4-6 weeks before rolling out.
Structured runbooks, semantic search, AI-drafted replies, live ticket integrations. Free to start. Set up in under a minute.
Get started