unsubbed.co

xyOps

xyOps lets you run workflow automation and server monitoring entirely on your own server.

A complete ops stack as a single self-hosted install — honest look at what you get and what you give up.

TL;DR

  • What it is: BSD-3-Clause-licensed platform combining job scheduling, visual workflow automation, server monitoring, smart alerting, and incident response in one cohesive system [README].
  • Who it’s for: Developers and ops teams running fragmented stacks — cron here, monitoring there, alerting somewhere else — who want them collapsed into one self-hosted platform without surrendering data or flexibility [1][README].
  • Cost savings: Free tier includes all app features. Pro and Enterprise tiers exist for support, but prices aren’t publicly listed — you have to contact the team.
  • Key strength: The integrated feedback loop. When a job fails, the alert includes what was running on that server. One click opens a system snapshot — processes, CPU load, network connections — taken at the moment the alert fired. The connection between execution and observability is built-in, not bolted on [1][README].
  • Key weakness: Single listed contributor on GitHub, no visible user reviews, and opaque paid tier pricing. For a tool you’d bet a production ops stack on, that combination warrants scrutiny before committing.

What is xyOps

xyOps is an open-source platform that treats monitoring and job execution as two sides of the same coin. Most automation tools do one thing: run tasks. xyOps runs tasks and connects each execution to live monitoring, alerting, server snapshots, and ticketing — creating what the README calls “a single integrated feedback loop” [README].

The pitch is best understood through the problem it’s solving. An alert fires. You open your monitoring dashboard. You open your job scheduler. You open your ticketing system. None of them share context. You spend twenty minutes correlating timestamps before you even start debugging. xyOps’ answer: when that alert fires, it already knows which jobs were running, it’s already captured the process list and CPU state, and it can open a ticket with all of that attached [README][1].

The project is built and maintained by pixlcore, runs on Node.js, ships as a Docker image from GitHub Container Registry, and is licensed BSD-3-Clause — no commercial restrictions on self-hosting, forking, embedding, or redistribution [README]. It has 3,579 GitHub stars and 361 forks. The project’s web presence is its GitHub page and docs.xyops.io — there’s no separate marketing site, which tells you something about who the current audience is.


Why people choose it

One substantive third-party write-up exists to draw from: a February 2026 piece from systemadministration.net [1]. The absence of multiple independent reviews is itself meaningful data — xyOps has real GitHub traction but hasn’t generated the Reddit threads, G2 reviews, or comparison blog posts that cluster around tools like n8n or Rundeck. You’re mostly working from first-party material. Keep that in mind.

The integrated ops story is the actual draw. The systemadministration.net piece [1] frames the core value accurately: the pain isn’t that any single ops tool is bad, it’s that they don’t talk to each other. Engineers bounce between dashboards during incidents because the job runner doesn’t know what the monitoring system knows, and neither knows what the ticketing system needs. xyOps bets that collapsing these into one system speeds up incident response and reduces the “tribal knowledge” problem — where only the person who wrote the cron jobs six months ago understands what’s running and why [1].

The BSD license is a real differentiator. Most comparable self-hosted tools use AGPL, proprietary licenses, or variants of “fair-code” that restrict commercial redistribution. BSD-3-Clause gives you genuine freedom: embed xyOps in your own product, white-label it for clients, redistribute modified versions — without a legal conversation [README].

Beyond-cron scheduling with visual workflows. Cron can’t do branching, event-driven triggers, or conditional actions. xyOps’ visual workflow editor lets you build pipelines that are explicit and inspectable — something you can hand off to a new team member without a documentation session [1][README].
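To make the cron comparison concrete: cron can fire a command on a schedule, but it can’t express “if this job fails, do that instead.” A minimal shell sketch of the branching logic a visual workflow replaces — the job names are illustrative stand-ins, not xyOps syntax:

```shell
# Illustrative stand-ins for real jobs. A workflow engine models this
# branching visually; with plain cron it ends up buried in wrapper scripts.
backup_database() { false; }           # simulate a failed backup
upload_offsite()  { echo "uploading to offsite storage"; }
notify_oncall()   { echo "ALERT: $1"; }

# Conditional branch on job outcome — the part cron cannot express.
if backup_database; then
    upload_offsite
else
    notify_oncall "backup failed on $(hostname)"
fi
```

Multiply this by dozens of jobs and the appeal of an explicit, inspectable pipeline becomes clear.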

What’s harder to assess without independent user reviews is whether the integrated approach holds up under real production load or whether coupling all these systems creates new failure modes. That’s an open question.


Features

Based on the README and the systemadministration.net analysis [1]:

Job scheduling:

  • Scheduler positioned as “way beyond cron” — supports event-driven triggers, conditional execution, branching logic, and pipeline construction [README]
  • Built for fleet-wide job management from a single interface, from five servers to five thousand [README]
  • Per-job performance tracking and execution history [1]

Visual workflow editor:

  • Drag-and-connect UI for building pipelines from events, triggers, actions, and monitors [README][1]
  • The design goal is maintainability: workflows are explicit and reviewable rather than a directory of undocumented shell scripts [1]

Server monitoring:

  • Operator-defined monitoring targets — you specify what to watch, not a fixed preset [README]
  • Point-in-time server snapshots: processes, CPU load, network connections captured when alerts fire [README][1]
  • Snapshots are linked to the alerts that triggered them, preserving state for post-incident analysis [1]
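For readers unfamiliar with the snapshot idea, here is roughly what such a capture amounts to in plain shell — a sketch of the concept only, not xyOps’ implementation, with file names chosen for illustration:

```shell
# Minimal point-in-time server snapshot: processes, CPU load, connections.
# The value of doing this at alert time is that the state is preserved
# even after the incident's symptoms have passed.
snapshot_dir="/tmp/snapshot-$(date +%s)"
mkdir -p "$snapshot_dir"
ps aux > "$snapshot_dir/processes.txt"            # running processes
uptime > "$snapshot_dir/load.txt"                 # CPU load averages
ss -tunap > "$snapshot_dir/connections.txt" 2>/dev/null || true  # network connections (Linux)
echo "snapshot written to $snapshot_dir"
```

xyOps’ contribution is linking this capture to the alert and ticket automatically rather than leaving it as a manual habit.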

Smart alerting:

  • Complex trigger conditions, not just simple thresholds [README][1]
  • Alert notifications include which jobs were running on the affected server at alert time [1]
  • The explicit goal: fewer meaningless pings, more context in the first message [1]

Incident response:

  • Automatic ticket opening on job failure, with logs, history, and linked metrics attached [README][1]
  • Private ticketing system available on Pro and Enterprise tiers [README]

Fleet and deployment:

  • Docker-native install with a single docker run command to a working instance [README]
  • Docker socket mount for container-level monitoring [README]
  • SSO setup and support on Enterprise tier only [README]
  • Air-gapped installation support is Enterprise-only [README]

One gap in the available data: there’s no published integration catalog. No list of what external services xyOps connects to natively, no webhook ecosystem documentation, no plugin registry. If your use case requires xyOps to push notifications to Slack, pull metrics from an external API, or trigger actions in third-party tools, you’d need to read the docs at docs.xyops.io to find out what’s actually supported.


Pricing: SaaS vs self-hosted math

Three tiers, per the README:

Free (Community):

  • All app features
  • Community support via GitHub
  • Open source forever

Professional:

  • All app features
  • Professional support
  • Private ticketing system
  • 24-hour response time on support tickets
  • Price: not publicly listed — see https://xyops.io/pricing

Enterprise:

  • All app features
  • Enterprise support + live chat
  • SSO setup and support
  • Air-gapped installation support
  • 1-hour response time on tickets
  • Price: not publicly listed

No dollar figures for Pro or Enterprise are available in the materials reviewed for this article. If you’re evaluating xyOps for a production deployment and need to budget for support, you’ll need a direct conversation with the team. The free tier’s promise — all features, no gating — is meaningful, but what it doesn’t include is meaningful too: SSO, professional support, and air-gapped installs all require paying for tiers with undisclosed pricing.

Self-hosted free tier cost baseline:

  • Software: $0 (BSD-3-Clause)
  • VPS to run it: $6–20/month depending on fleet size and monitoring load
  • Setup time: your own or a one-time deployment engagement

Comparable SaaS stack costs (for framing):

  • Rundeck Cloud: starts around $50/month
  • PagerDuty alerting: $21–36/user/month
  • A combined scheduling + monitoring + alerting stack can run $200–500/month for a small team before usage ceilings push the price higher

The self-hosted free tier delivers all three of those functions for the price of the VPS. If xyOps does what it says, the cost math is obvious. The unknowns are configuration overhead and ongoing maintenance burden — not negligible for non-technical teams.
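The annual arithmetic, using the article’s own estimates (assumed figures for framing, not vendor quotes):

```shell
# Back-of-envelope yearly comparison. High end of the VPS range vs.
# midpoint of the $200-500/month SaaS stack range quoted above.
vps_monthly=20
saas_monthly=350
echo "Self-hosted/yr: \$$((vps_monthly * 12))"             # $240
echo "SaaS stack/yr:  \$$((saas_monthly * 12))"            # $4200
echo "Difference/yr:  \$$(((saas_monthly - vps_monthly) * 12))"  # $3960
```

Even at the low end of the SaaS range, self-hosting saves a few thousand dollars a year — before counting your own time configuring and maintaining it.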


Deployment reality check

The README’s one-liner Docker command is genuinely short:

docker run --detach --init --restart unless-stopped \
  -v xy-data:/opt/xyops/data \
  -v /local/path/to/xyops-conf:/opt/xyops/conf \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e TZ="America/Los_Angeles" \
  -p 5522:5522 -p 5523:5523 \
  --name "xyops01" ghcr.io/pixlcore/xyops:latest

Default login is admin/admin — change this immediately. The Docker socket mount (/var/run/docker.sock) is required for container monitoring but grants the container significant host access; understand that surface before running in production [README].

What a real setup requires:

  • A Linux VPS — minimum RAM not specified in available docs, but a combined scheduler + monitoring daemon running against a real fleet will want at least 2–4GB
  • Docker installed
  • A reverse proxy (Caddy or nginx) for HTTPS — not bundled
  • A host-side config directory
  • Time to define your monitoring targets, job schedules, and alert rules
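As an example of the reverse-proxy step, here is a minimal Caddyfile fronting the web UI port from the docker run command above. The domain is a placeholder, and this assumes Caddy is installed and run separately — it is a sketch of one common setup, not an official recommendation:

```shell
# Write a minimal Caddyfile that terminates HTTPS and proxies to the
# xyOps web UI on port 5522 (per the README's docker run command).
# "ops.example.com" is a placeholder; Caddy provisions certs automatically.
cat > Caddyfile <<'EOF'
ops.example.com {
    reverse_proxy localhost:5522
}
EOF
echo "Caddyfile written; start Caddy with: caddy run --config Caddyfile"
```

nginx works just as well; Caddy is shown only because its config for this case is two lines.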

Honest risks:

The single-contributor situation is the most significant flag. pixlcore is listed as the sole contributor on GitHub. The project has 2,456 commits and 111 releases — it’s active — but one person’s availability governs the project’s entire future. BSD license means you can fork it if the project goes quiet, but maintaining a fork of an ops platform is not a trivial proposition.

Feature PRs are explicitly not accepted. The contributing guide states this directly. If you need a capability the core doesn’t ship, you wait, file an issue and hope, or maintain a private fork [README].

Community support means GitHub issues. There’s no visible Discord, active GitHub Discussions, or Reddit community in the available data. The [1] source doesn’t surface any community complaints about deployment, which either means it’s smooth or means there aren’t enough users writing about it publicly yet.

SSO and air-gapped installs being Enterprise-only matters for regulated environments and larger teams with standard identity management practices [README].


Pros and cons

Pros

  • BSD-3-Clause license. Genuine open-source freedom — embed, fork, redistribute, white-label. No commercial restrictions [README].
  • Integrated feedback loop. The core architecture — job failures producing alerts with server snapshots and linked tickets — addresses a real pain that most monitoring and scheduling tools punt on [1][README].
  • All features in free tier. Scheduling, monitoring, alerting, incident response, fleet management — all available without paying. You pay for support, not features [README].
  • Visual workflow editor. Pipelines are inspectable and maintainable by people who didn’t write them [1][README].
  • Fleet-native from the start. Not an afterthought — multi-server orchestration is in the core design [README].
  • Simple Docker deployment. Single command to a working instance. The install path is not the blocker [README].

Cons

  • Single listed contributor. One person’s availability governs the project’s roadmap, release cadence, and survival. High bus factor risk for production use.
  • No feature PRs accepted. External contributions are explicitly limited. If you need something the core doesn’t support, you’re waiting or forking [README].
  • Opaque paid tier pricing. Pro and Enterprise prices aren’t published. You can’t assess total cost of ownership for production without a sales conversation [README].
  • No visible independent user reviews. One third-party article [1] is the extent of external coverage available. No Trustpilot, G2, or community discussion to cross-reference.
  • SSO and air-gapped support are Enterprise-only. Standard for larger teams; a hard blocker if you need SSO and can’t budget Enterprise [README].
  • No integration catalog. What external services xyOps connects to natively is undocumented in available materials. Integration breadth is unknown.
  • Monitoring configuration is opaque. “Define exactly what you want to monitor” is the stated model, but what monitoring primitives exist (uptime checks, log watching, metric thresholds, port probes) isn’t enumerated in the README.

Who should use this / who shouldn’t

Use xyOps if:

  • You’re running a server fleet and currently juggling separate cron scripts, a monitoring agent, an alerting tool, and manual incident response — and you want them as one system.
  • BSD licensing matters: you want to embed or redistribute without commercial licensing friction.
  • You have a developer or sysadmin who can handle Docker deployment and initial configuration.
  • Your current alerting gives you notifications with no context — you want alerts that show you what the server was doing when the trigger fired.
  • All-features-in-free-tier is a priority over commercial support guarantees.

Skip it if:

  • You’re a non-technical founder with no technical person to lean on. The deployment is Docker-simple, but configuring monitoring targets, job schedules, and alert rules requires ops literacy.
  • You need a large integration catalog. If your automation involves connecting to Salesforce, HubSpot, Stripe, or dozens of SaaS tools, xyOps isn’t the right choice — look at n8n or Activepieces.
  • Single-maintainer bus factor is a hard no for your organization’s risk posture. It should be at minimum a documented risk in your evaluation.
  • Your team requires SSO and you aren’t budgeting for Enterprise tier.
  • You need only workflow automation without monitoring/alerting. The tool’s value is in the integration — if you don’t need the monitoring side, you’d be underutilizing it.

Consider the combination of Uptime Kuma + n8n instead if:

  • You want two mature, widely-adopted open-source tools with large communities, even at the cost of separate systems. Two well-supported specialized tools may carry less operational risk than one integrated tool from a single maintainer.

Alternatives worth considering

Tools that overlap with parts of xyOps’ scope:

  • Rundeck — the established open-source job scheduler. More mature, larger community, well-documented. Doesn’t do monitoring natively but integrates with standard monitoring stacks. The direct alternative for the scheduling piece.
  • Apache Airflow — Python-based workflow orchestration. Massive in data engineering, large ecosystem, steeper learning curve. Not a monitoring tool.
  • n8n — open-source workflow automation with 200+ integrations. Better for connecting external SaaS tools; no server monitoring.
  • Uptime Kuma — open-source uptime monitoring with clean UI and solid alerting. Does one thing well; no job scheduling or incident response.
  • Grafana + Prometheus — the standard self-hosted monitoring stack. Extremely powerful, massive ecosystem, significantly more complex to configure than xyOps.
  • Netdata — lightweight real-time monitoring agent. Very easy to deploy, strong out-of-the-box coverage, no scheduling.
  • Temporal — open-source durable workflow orchestration. More engineering-heavy, more robust for complex distributed workflows.

The honest competitive comparison for xyOps’ specific pitch — integrated scheduling + monitoring + incident response in one self-hosted install — is a custom stack of Rundeck + Prometheus/Grafana + PagerDuty. xyOps is cheaper and simpler than that combination. The trade-off is maturity and community depth.


Bottom line

xyOps makes a coherent architectural bet: the real pain in ops tooling isn’t any single capability, it’s that job scheduling, monitoring, alerting, and incident response live in separate systems with no shared context. The integrated feedback loop — job failure triggers an alert that carries server state and opens a pre-populated ticket — is a genuinely useful idea and not something you get out of the box from Rundeck, n8n, or Grafana alone.

The risks are real and shouldn’t be minimized. A single maintainer, no public community, and undisclosed paid tier pricing add up to a tool that requires a thoughtful pilot before it’s production-critical. Run it for two or three months on a non-critical workload, watch the GitHub commit cadence, and have a migration plan ready. If it delivers on the integration promise, the cost math against a multi-tool SaaS stack is obvious. If the single-maintainer scenario materializes, BSD license means you’re not stranded — just doing more maintenance than you planned.

If deploying and configuring the stack is the blocker, upready.dev handles one-time self-hosted deployments like this as a fixed-fee engagement.


Sources

  1. systemadministration.net, “xyOps: the open-source ‘all-in-one’ platform that combines workflow automation, server monitoring, and incident response” (Feb 6, 2026). https://systemadministration.net/xyops-open-source-all-in-one-platform/

Primary sources: the xyOps README and the documentation at docs.xyops.io.