At 9:07 a.m., my Slack lit up again. This time, MCP Slack Automation saved me from hunting down the answer in Confluence, Jira, or New Relic:
“Anyone know the JQL to count today’s Crave escalations?”
I saw versions of that question all day. The answers existed, but they were trapped: inside someone’s memory, in a Confluence runbook with a cryptic title, or behind JQL/CQL/NRQL syntax only a few people could write. My goal with MCP was simple: turn plain English in Slack into safe, auditable answers so the whole team could self-serve without learning three query languages or spelunking wikis. Security set the boundaries (especially for write actions). Reliability determined whether people kept using it after week one.
Why I built it
My ops teammates lived across Jira, Confluence, New Relic, and—socially and operationally—Slack. A typical day meant flipping tools, translating natural questions into query languages, and DM’ing whoever remembered the right filter. We weren’t short on data; we were short on access. MCP became the bridge: accept a natural-language request and produce a deterministic, governed action—read or write—with receipts.
What I actually shipped
I wired three surfaces behind a Slack bot and focused on answering in the thread (a minimal server sketch appears below):
• Jira: Ask a plain question like “How many *Crave* escalations today?” and get a count, a short pattern summary, and deep links to the issues, right in Slack.
• Confluence: Ask a plain-language question like “How do I contact X team?” and the bot searches the space, finds the relevant team page, on-call rota, and contact channels, and replies with a condensed summary plus links and owners.
• New Relic: Ask “Is service *foo* missing heartbeats this hour?” and get a status, a one-line diagnosis, and a dashboard link.
I don’t dump JQL/NRQL/CQL into Slack. Instead, I attach a “View details” button that opens an audit panel with the exact query, filters, and timing for people who want the receipts. Security and power users love that; everybody else gets a clean, readable answer.
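To ground those three surfaces, here is roughly how they look as MCP tools, assuming the official `mcp` Python SDK and its FastMCP helper. The tool names, arguments, and return shapes below are illustrative stand-ins, not the production interface; the real implementations call Jira, Confluence, and New Relic behind vetted templates.

```python
# Minimal sketch of the MCP server behind the Slack bot, assuming the
# official `mcp` Python SDK (FastMCP). Tool names and return shapes are
# illustrative, not the production interface.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ops-bridge")

@mcp.tool()
def count_jira_escalations(label: str, window: str = "today") -> dict:
    """Count escalations via a pre-approved JQL template (read-only)."""
    # Real version: fill a vetted JQL template and call the Jira API.
    return {"count": 0, "issues": [], "query_id": "jira.count_escalations"}

@mcp.tool()
def search_confluence(question: str, space: str) -> dict:
    """Fan-out CQL search in one space; returns titles, owners, links."""
    # Real version: generate keyword variants, run CQL, de-duplicate, summarize.
    return {"pages": [], "query_id": "confluence.search"}

@mcp.tool()
def check_heartbeats(service: str, window_minutes: int = 60) -> dict:
    """Templated NRQL heartbeat check for a single service."""
    # Real version: run a fixed NRQL query scoped to `service`.
    return {"status": "unknown", "dashboard": None, "query_id": "newrelic.heartbeats"}

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```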
Lesson 1 — Security isn’t a checkbox; it’s the product
Read-only tools were easy to approve. The moment I proposed writes—create a Jira ticket, open a Confluence page, change a status—the conversation moved from code to governance. My first security review killed a naive “create issue” endpoint in five minutes, and it was the best thing that could’ve happened.
Instead, I shipped something straightforward: every change first showed a Slack preview/diff, and nothing happened until a human approved it. I kept separate credentials for reading and writing, limited access to the right project or space, blocked risky actions, and tested in shadow mode by logging “would-have-done” actions before turning on real writes. This wasn’t just safer—it also built trust.
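Here is a sketch of that write path: every change is previewed in the thread, parked until a human approves, and shadow mode only logs what would have happened. The function names and payload shape are illustrative stand-ins, not the production interface.

```python
# Sketch of the guarded write path: preview first, human approval,
# shadow mode that logs "would-have-done" actions instead of acting.
import json
import logging
import uuid

log = logging.getLogger("mcp.writes")
SHADOW_MODE = True                 # log proposed actions, change nothing
pending: dict[str, dict] = {}      # approval_id -> proposed action

def post_slack_preview(approval_id: str, action: str, payload: dict) -> None:
    # Stand-in: would render a diff plus Approve/Reject buttons in the thread.
    print(f"[preview {approval_id}] {action}: {json.dumps(payload)}")

def create_jira_issue(**fields) -> None:
    # Stand-in: would call the Jira REST API with the write-only credential.
    print(f"[jira] created issue: {fields}")

def propose_write(action: str, payload: dict) -> str:
    """Post a preview and park the action; nothing runs until approval."""
    approval_id = str(uuid.uuid4())
    pending[approval_id] = {"action": action, "payload": payload}
    post_slack_preview(approval_id, action, payload)
    return approval_id

def on_approval(approval_id: str, approver: str) -> None:
    """Called by the Slack interaction handler when someone clicks Approve."""
    proposal = pending.pop(approval_id)
    if SHADOW_MODE:
        log.info("SHADOW would-have-done %s (approved by %s): %s",
                 proposal["action"], approver, json.dumps(proposal["payload"]))
        return
    if proposal["action"] == "create_jira_issue":
        create_jira_issue(**proposal["payload"])
    log.info("EXECUTED %s approved by %s", proposal["action"], approver)
```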
Lesson 2 — Templates beat free-form prompting
Letting the LLM “figure out the query” felt magical until it drifted into expensive or over-broad searches. I replaced that with a small catalogue of intents (“count escalations today,” “open Sev-1 last 24h,” “pages about a topic in space X”), each mapped to a deterministic query template. The model only filled slots (labels, service names, time ranges), and a validator enforced allowed fields, operators, and windows.
It wasn’t as romantic as free-form prompting, but it was fast, cheap, and—most importantly—predictable. If a query failed, I returned a concise error and a safer alternative you could click.
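To make the template idea concrete, here is a minimal sketch of one catalogue entry and its validator. The intent name, allowed labels, and time windows are made up for illustration; the production catalogue and JQL differ.

```python
# Minimal sketch of the intent catalogue: the LLM only picks an intent and
# fills slots; a validator enforces allow-listed values before anything runs.
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    template: str                 # deterministic JQL with named slots
    allowed_labels: frozenset
    allowed_windows: frozenset

CATALOGUE = {
    "count_escalations": Intent(
        template='project = OPS AND labels = "{label}" AND created >= {window}',
        allowed_labels=frozenset({"crave", "checkout", "payments"}),
        allowed_windows=frozenset({"startOfDay()", "-2h", "-24h"}),
    ),
}

def build_query(intent_name: str, slots: dict) -> str:
    """Validate the LLM-filled slots, then render the fixed template."""
    intent = CATALOGUE[intent_name]            # unknown intent -> KeyError
    label, window = slots["label"].lower(), slots["window"]
    if label not in intent.allowed_labels:
        raise ValueError(f"label {label!r} is not in the allow-list")
    if window not in intent.allowed_windows:
        raise ValueError(f"window {window!r} is not in the allow-list")
    return intent.template.format(label=label, window=window)

# Example: the model mapped "Crave escalations today" to these slots.
print(build_query("count_escalations", {"label": "Crave", "window": "startOfDay()"}))
```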
Lesson 3 — Meet people where they are, and show your work (without clutter)
The single biggest adoption driver was answering in the same Slack thread where the question began. I kept the reply focused on results—counts, summaries, charts, links. For transparency, the details panel shows the exact JQL/CQL/NRQL and timing. That split kept day-to-day messages readable while giving security and power users everything they needed.
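For the curious, the reply payload looks roughly like this, assuming Slack’s Block Kit: the result in a section block, the audit trail behind a single button. The action id and helper name are illustrative.

```python
# Sketch of the in-thread reply: readable result up front, raw query behind
# a "View details" button. Posted with chat.postMessage using the asking
# message's thread_ts so it lands in the same thread.
def build_reply(count: int, issues_url: str, audit_id: str) -> list[dict]:
    return [
        {"type": "section",
         "text": {"type": "mrkdwn",
                  "text": f"*{count}* Crave escalations today. <{issues_url}|Open the issues>"}},
        {"type": "actions",
         "elements": [{
             "type": "button",
             "text": {"type": "plain_text", "text": "View details"},
             "action_id": "open_audit_panel",  # handler opens a modal with the exact JQL and timing
             "value": audit_id,
         }]},
    ]
```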
Lesson 4 — Confluence recall is hard; boost it with fan-out + synthesis
CQL is unforgiving. One weak keyword and you miss the runbook. I added a tiny ADK agent that generates several keyword variants, runs multiple searches in the right space, de-duplicates, and summarizes the top pages. Practically, questions that used to spiral into “do you remember that doc?” became 10-second answers with links and owners—again, returned as a summary in Slack, not raw CQL.
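Here is the rough shape of that fan-out, assuming Confluence Cloud’s content search endpoint. The variant generator is a stub standing in for the ADK agent, and the base URL is a placeholder; authentication is assumed to live on the session.

```python
# Sketch of the fan-out search: several keyword variants, one CQL call each,
# de-duplicated by page id before summarization.
import requests

BASE = "https://example.atlassian.net/wiki"   # placeholder site

def keyword_variants(question: str) -> list[str]:
    # Stand-in for the small ADK agent that proposes alternative phrasings.
    return [question, question.replace("contact", "on-call"), "team page " + question]

def fan_out_search(question: str, space: str, session: requests.Session) -> list[dict]:
    seen, pages = set(), []
    for phrase in keyword_variants(question):
        cql = f'space = "{space}" AND text ~ "{phrase}"'
        resp = session.get(f"{BASE}/rest/api/content/search",
                           params={"cql": cql, "limit": 5})
        resp.raise_for_status()
        for page in resp.json().get("results", []):
            if page["id"] not in seen:
                seen.add(page["id"])
                pages.append(page)
    return pages  # top pages go to the summarizer; raw CQL never hits Slack
```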
Lesson 5 — Reliability is a feature of MCP Slack Automation
I learned this the day I accidentally returned 3,000 Jira issues into a Slack thread. After that, everything had limits: cap results, page large sets, and time-box by default (today, last 2h). I cached common reads briefly, used idempotency keys on writes, and watched metrics for latency and error spikes. When things failed, I didn’t shrug—I returned a short explanation and a safer next step.
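The guardrails boil down to a handful of small, boring utilities. A sketch under assumed names and defaults:

```python
# Sketch of per-call guardrails: capped results, a default time box,
# a short-lived read cache, and an idempotency key on writes.
import hashlib
import time

MAX_RESULTS = 50                  # cap what ever reaches a Slack thread
DEFAULT_WINDOW = "startOfDay()"   # "today" unless the user asks otherwise
_cache: dict[str, tuple[float, object]] = {}

def cached_read(key: str, fetch, ttl_seconds: int = 60):
    """Serve repeated reads of the same key from a short-lived cache."""
    now = time.monotonic()
    hit = _cache.get(key)
    if hit and now - hit[0] < ttl_seconds:
        return hit[1]
    result = fetch()
    _cache[key] = (now, result)
    return result

def idempotency_key(action: str, payload: dict) -> str:
    """Same proposed write -> same key, so retries can't double-post."""
    digest = hashlib.sha256(f"{action}:{sorted(payload.items())}".encode()).hexdigest()
    return digest[:32]
```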
A day in the life, post-MCP
By mid-morning, someone asks, “Crave escalations today?” They get a number, a two-line trend read, and a link to the issues. Around lunch, a teammate wants to know how to contact X team. The bot searches across Confluence, finds the relevant team page with contact details, and replies with a short summary plus links and owners. Later, an on-call asks about heartbeats for service foo. A templated check returns status and a dashboard link. If someone wants to publish a shift summary to Confluence, the bot drafts it and posts a preview for approval.
None of this removes human judgment; it removes the busywork between a question and a trustworthy answer.
What changed by doing it this way
Time-to-answer fell from minutes to seconds. Onboarding sped up because people didn’t need to learn three query languages before being useful. The shape of work changed, too: instead of hunting syntax, we spent time reading patterns and deciding what to do next. Security slept better because every sensitive move had a preview, an approver, and a log in the details panel.
If I were starting again tomorrow
I’d do it in the same order: ship read-only wins first to earn goodwill. Design the approval UX before enabling any writes. Publish my intent catalogue as a living document—the product surface of the bot—and treat logs as if they’ll one day be a compliance exhibit. They might.
Closing thought
Building MCP servers wasn’t about wiring APIs; it was about productizing how ops work actually happens. The tech was straightforward. Earning trust—through transparency, guardrails, and fast, reliable answers—made it stick. If you copy just one thing from my approach, copy the receipts: keep results front-and-center, and tuck the raw queries behind a View details link.