Brickslasher scans your workspace in under 5 minutes and tells you exactly what's wasting money — idle clusters, zombie jobs, runaway GPU spend, and 100+ other patterns.
Most teams discover they've been overspending by accident — usually when the bill arrives. Brickslasher finds waste before it compounds.
Idle all-purpose clusters running overnight, oversized driver nodes, GPU clusters used for non-GPU workloads, and auto-termination gaps. The single most common source of unnoticed spend.
Jobs without timeouts, runaway runtimes, high failure rates, all-purpose compute used for batch jobs, and cron schedules firing dozens of times an hour.
High interactive compute ratio, accelerating spend trends, on-demand versus spot savings opportunities, and DBU growth patterns that signal runaway workloads.
Resources owned by personal email accounts, missing tag coverage, classic SQL warehouses still in use, and unsupported runtime versions creating support risk.
GPU clusters left running post-training, oversized instances for inference workloads, and model serving resources without proper auto-scaling configured.
Creator concentration risk, duplicate cron schedules, job timeout anomalies, warehouse configuration inefficiencies, and cluster policy compliance gaps.
These are actual detection rules from Brickslasher — not marketing copy. Every finding comes with a resource name, explanation, and fix; a sketch of one such check follows.
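To make that concrete, here is a minimal sketch of what a check like the auto-termination rule could look like. It is illustrative only, not Brickslasher's actual implementation: it reads the public Databricks Clusters API with a read-only token taken from the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, and the 60-minute threshold is a placeholder rather than one of the 112 shipped rules.

```python
import os

import requests

# Illustrative stand-in for one detection rule: flag clusters with
# auto-termination disabled (autotermination_minutes == 0) or set too high.
HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-123456.7.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]  # read-only token; no write scopes needed
MAX_IDLE_MINUTES = 60                   # placeholder threshold, not Brickslasher's

resp = requests.get(
    f"{HOST}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    idle = cluster.get("autotermination_minutes", 0)
    if idle == 0 or idle > MAX_IDLE_MINUTES:
        name = cluster.get("cluster_name", cluster["cluster_id"])
        print(f"[auto-termination gap] {name}: autotermination_minutes={idle} "
              f"(fix: auto-terminate after <= {MAX_IDLE_MINUTES} minutes)")
```

A production rule would also weigh node types and recent activity; the point is that each check reads metadata only and maps directly to a named resource and a fix.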
Adjust the sliders to match your workspace. Most teams find enough waste to justify the investment 5–10× over.
Every scan checks 112 rules. New waste patterns get flagged the moment they appear — not weeks later when the bill arrives.
Each finding includes the resource name, a concrete dollar estimate, and a one-line fix recommendation. No ambiguity, no guessing.
Track resolved findings across scans. Show finance exactly how much was recovered — documented, exportable, repeatable.
Connect a read-only token, run a scan, get prioritized findings with dollar estimates — no configuration, no agents, no data movement.
Paste your workspace URL and a read-only Service Principal credential or Personal Access Token (PAT). Brickslasher never modifies anything — it only reads cluster, job, and billing metadata.
112 detection rules fire across clusters, jobs, billing, workspace config, and AI compute. Each rule generates a prioritized finding with resource names and dollar estimates.
Findings land in your dashboard with severity, fix recommendations, and savings estimates. Assign to team members, set status, and track ROI as issues get resolved.
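The shape of a finding is worth spelling out. The sketch below is hypothetical (the field names are ours, not Brickslasher's published schema), but every finding carries the same ingredients: a severity, the exact resource, a dollar estimate, and a one-line fix.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Hypothetical shape of a scan finding; field names are illustrative."""
    rule_id: str                    # unique rule ID (rules can be disabled per workspace)
    severity: str                   # "critical" | "high" | "medium" | "low"
    resource_name: str              # the exact cluster, job, or warehouse affected
    explanation: str                # why the rule fired
    fix: str                        # one-line remediation
    est_annual_savings_usd: float   # estimate tied to observed usage

example = Finding(
    rule_id="CLU-014",              # hypothetical ID
    severity="high",
    resource_name="ml-dev-cluster", # hypothetical resource
    explanation="All-purpose cluster idle overnight with no auto-termination.",
    fix="Set auto-termination to 60 minutes.",
    est_annual_savings_usd=18400.0, # illustrative number only
)
```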
Clusters, jobs, billing, workspace governance, AI/ML compute — every category of waste covered out of the box, zero configuration.
Every finding includes an estimated annual savings figure — not vague guidance, actual numbers tied to your usage patterns.
Critical findings are pushed to your team in real time. A weekly digest keeps everyone aligned without manual reporting overhead.
Track how findings change across scans. See the ROI of resolved issues as your spend drops over time — exportable for leadership.
Set daily or weekly automated scans. New waste gets detected the moment it appears, not when the bill arrives next month.
Assign findings to team members, add comments, track fix status. Built for engineering teams, not just finance and FinOps.
Secure, no-expiry authentication via Service Principals. No broad PAT tokens, no manual rotation, no plaintext credentials at rest.
Write YAML-based custom detection rules for your team's specific patterns, naming conventions, and cost policies.
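As an illustration of the custom-rule feature, here is a minimal sketch of what a YAML rule and its evaluation might look like. The schema (id, match, condition, fix) is hypothetical, not Brickslasher's documented rule format, and the evaluator uses PyYAML.

```python
import re

import yaml  # PyYAML: pip install pyyaml

# Hypothetical custom rule; the schema below is illustrative, not
# Brickslasher's documented rule format.
RULE_YAML = """
id: custom-naming-001
description: Clusters must follow the team naming convention
severity: medium
match:
  resource: cluster
condition:
  name_pattern: "^(data|ml|bi)-[a-z0-9-]+$"
fix: Rename the cluster to <team>-<purpose>.
"""

def evaluate(rule: dict, clusters: list[dict]) -> list[str]:
    """Return names of clusters that violate the rule's naming pattern."""
    pattern = re.compile(rule["condition"]["name_pattern"])
    return [c["cluster_name"] for c in clusters
            if not pattern.match(c["cluster_name"])]

rule = yaml.safe_load(RULE_YAML)
print(evaluate(rule, [{"cluster_name": "ml-training"},
                      {"cluster_name": "Bobs_Cluster"}]))
# -> ['Bobs_Cluster']
```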
Brickslasher connects to your Databricks workspace using read-only API credentials. We never store your workspace data — every scan runs in your session and results are written only to your isolated account.
Brickslasher only ever reads workspace metadata. No write permissions requested. No data pipelines, notebooks, or clusters are ever modified.
Cluster configs, job definitions, and billing summaries are processed in-memory during the scan and never written to Brickslasher servers.
All API communication is encrypted in transit with TLS 1.2+. Stored credentials are protected with AES-256 encryption and per-tenant keys. Secrets are never logged or exposed.
Every scan, finding status change, and team action is logged with timestamp and user identity. Exportable for compliance reviews.
We work directly with data engineering teams and FinOps leads at companies with serious Databricks spend. Every customer gets full platform access from day one.
We review your Databricks environment together — workspaces, spend range, team structure, and what's causing the most pain. No commitment required.
We run Brickslasher against your real environment. You see actual findings with dollar figures from your own data. No synthetic demos, no mock data.
Contract signed. Full access provisioned. Scheduled scans configured. Your team is onboarded the same week — not months later.
Talk to our team. We'll show you exactly what Brickslasher finds in your environment — before you sign anything.
No. Workspace metadata (cluster configs, job definitions, billing summaries) is processed in-memory during the scan. Finding results (rule IDs, resource names, savings estimates) are stored in your isolated account so you can track remediation — but raw Databricks data is never persisted on our end.
Read-only access only. You can use a Personal Access Token or a Service Principal with CAN_VIEW on clusters and jobs, and SELECT on the system tables the scan queries (system.billing.usage, system.compute.clusters). We never request write permissions. You can scope the Service Principal to a single workspace if preferred.
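If you go the Service Principal route, the grants look roughly like the sketch below. This is a hedged example: the exact grants depend on your Unity Catalog setup, the principal name and connection details are placeholders, and it uses the databricks-sql-connector package.

```python
from databricks import sql  # pip install databricks-sql-connector

PRINCIPAL = "brickslasher-scanner"  # placeholder service principal name

# Read-only grants on the system tables the scan queries. Run as an admin;
# exact requirements can vary with your Unity Catalog configuration.
GRANTS = [
    "GRANT USE CATALOG ON CATALOG system TO `{p}`",
    "GRANT USE SCHEMA ON SCHEMA system.billing TO `{p}`",
    "GRANT SELECT ON TABLE system.billing.usage TO `{p}`",
    "GRANT USE SCHEMA ON SCHEMA system.compute TO `{p}`",
    "GRANT SELECT ON TABLE system.compute.clusters TO `{p}`",
]

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abc123",                        # placeholder
    access_token="<admin-token>",                                  # placeholder
) as conn:
    with conn.cursor() as cursor:
        for stmt in GRANTS:
            cursor.execute(stmt.format(p=PRINCIPAL))
```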
Reach out to sales@brickslasher.org. We'll set up a discovery call, run a scoped pilot against your real environment, and get you fully onboarded the same week the contract is signed. No waiting, no slow rollouts.
Yes — our standard process includes a pilot scan against your real Databricks environment before you commit. You'll see actual findings with dollar figures from your own data, not a synthetic demo. We believe the numbers speak for themselves.
2–5 minutes for most workspaces. We pull cluster and job metadata via the Databricks REST API (Tier 1, ~30 seconds) and billing system tables via SQL warehouse (Tier 2, ~1–2 minutes depending on data volume). Workspaces with Unity Catalog enabled get deeper cost attribution.
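To give a feel for the Tier 2 step, here is the kind of aggregate such a scan might pull from system.billing.usage. The query and connection details are illustrative; Brickslasher's actual queries are not published.

```python
from databricks import sql  # pip install databricks-sql-connector

# Illustrative Tier 2 read: 30-day DBU totals by SKU from the billing
# system table, fetched over a SQL warehouse.
QUERY = """
SELECT sku_name, SUM(usage_quantity) AS dbus
FROM system.billing.usage
WHERE usage_date >= date_sub(current_date(), 30)
GROUP BY sku_name
ORDER BY dbus DESC
"""

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abc123",                        # placeholder
    access_token="<read-only-token>",                              # placeholder
) as conn:
    with conn.cursor() as cursor:
        cursor.execute(QUERY)
        for sku_name, dbus in cursor.fetchall():
            print(f"{sku_name}: {dbus:,.0f} DBUs in the last 30 days")
```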
Yes — Brickslasher works with AWS, Azure, and GCP Databricks deployments. The Databricks REST API and system tables are consistent across clouds. Some rules (like spot instance detection) include cloud-specific logic for AWS and Azure. GCP spot detection is in active development.
Yes. Every rule has a unique ID and can be disabled per workspace. All customers also get access to custom YAML rules — you can write your own detection logic using the same framework as the built-in rules. Disabled rules are remembered across scans.
Same week as contract signing. Access is provisioned immediately. We help you connect your first workspace, configure scheduled scans, and set up Slack alerts in a single 30-minute session. Your team is running autonomous scans before the week is out.
We show you what Brickslasher finds in your real environment before you sign anything.