
PostHog Error Tracking Killed My Sentry Bill

Developer Tools
8 min read
TLDR
  • Sentry cost me 80 EUR/month for ~12k errors that PostHog now catches on the free tier
  • posthog-js already loaded on the storefront, so error tracking was a 4-line config change
  • Source maps upload in ~3s through a Vite plugin, stack traces stay readable
  • Tool count dropped from 2 to 1, and OpenTelemetry handles the gaps PostHog leaves

I was paying 80 EUR/month to Sentry for a one-person studio shipping ~12k errors per month, while PostHog already ran on the same pages for analytics and session replay. Once I noticed PostHog had quietly shipped a real Error Tracking product, the math was insulting. One evening later, Sentry was gone and my observability stack was a single SDK.

The Sentry Bill That Stopped Making Sense

I started with Sentry years ago because it was the obvious choice. Errors come in, they get grouped, you fix them, you move on. The Team plan at 80 EUR/month felt fine when I was on a Pro consulting contract. As a solo studio, every recurring line item gets re-evaluated, and Sentry kept losing the argument.

Here is what I was actually paying for. About 12,000 errors per month across Shopify storefront JS, a couple of Vercel apps, and a few cron jobs. Most of those 12k were the same five issues, which Sentry of course grouped, but the event quota still ticked down. I was also paying a seat tax for a team of one. Performance monitoring sat unused because I never trusted the sampling.

The real problem was duplication. PostHog was already on every page for product analytics and session replay. Two SDKs, two dashboards, two billing portals, two sets of data retention rules. When something broke at 23:00 on a Friday, I would jump between the Sentry issue view and the PostHog session replay to figure out what the user was doing when the error fired. Cross-referencing the same incident across two tools is the worst kind of busywork.

I had been running PostHog Cloud on the generous free tier for a year. The tier covers 1 million events, 5,000 session recordings, and now error tracking events too. My actual usage was nowhere near the cap. So I was paying Sentry 960 EUR per year to do something a free tier could absorb without noticing.

The trigger was a quota email. Sentry told me I was approaching my error limit because a deploy had introduced a noisy `null is not an object` somewhere in the cart drawer. I fixed the bug in 20 minutes, then sat down and asked the obvious question: why am I still here? The answer was nostalgia and switching cost, both bad reasons to keep a recurring bill alive.

What PostHog Error Tracking Actually Does

PostHog Error Tracking is not a thin wrapper around generic event capture. It catches uncaught exceptions, unhandled promise rejections, and manual `captureException` calls, then groups them by fingerprint, stores stack traces, and ties each error to the same session replay and analytics events PostHog already has. That last part is the unfair advantage. When I open an issue, the session replay is right there, no UUID copy-paste required.

The grouping is good enough. Not Sentry-good, I will admit that. Sentry has years of fingerprinting heuristics and they do squeeze duplicates better. PostHog groups by stack trace shape and message, which catches 90% of cases. For the other 10%, I add a manual fingerprint hint and move on; there is a sketch of the hint after the manual capture example below.

The init is small. Here is the relevant block from my storefront entry:


import posthog from 'posthog-js'

posthog.init(import.meta.env.VITE_POSTHOG_KEY, {
  api_host: 'https://eu.i.posthog.com',        // EU Cloud ingestion host
  capture_exceptions: true,                    // the entire error tracking flip
  capture_pageview: true,
  session_recording: { maskAllInputs: true },  // never record form input in replays
})

`capture_exceptions: true` is the whole feature flip. The same SDK that was already firing pageviews now also catches errors. I did not add a single byte to the bundle.

Manual capture works the way you expect:


try {
  await checkoutMutation(cart)
} catch (err) {
  // attach searchable context, then re-throw so upstream handling still runs
  posthog.captureException(err, {
    cart_id: cart.id,
    step: 'checkout_submit',
  })
  throw err
}

The second argument lands as searchable properties on the issue, so I can filter by `step = checkout_submit` in the dashboard. Alerts route through the same PostHog notification system I already use for product metrics. One Slack channel, one alert format, one place to silence noise.
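
And since this is where the capture call lives, the manual fingerprint hint I mentioned earlier is just one more property on the same call. A hedged sketch: `$exception_fingerprint` is the property name I believe posthog-js uses for grouping overrides, so verify it against your SDK version before relying on it.


try {
  await checkoutMutation(cart)
} catch (err) {
  posthog.captureException(err, {
    step: 'checkout_submit',
    // Assumption: $exception_fingerprint overrides the default grouping;
    // check the property name against your posthog-js version.
    $exception_fingerprint: 'checkout-submit-failure',
  })
  throw err
}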

The Migration Took One Evening

I budgeted a weekend. It took three hours. The reason it was fast: posthog-js was already loaded on every surface I cared about, so the work was flipping a flag, wiring source maps, swapping function calls, and turning Sentry off.

Step one was the config flip above. Deploy, wait an hour, watch errors flow in. They did, immediately. The first issue I caught was a Klaviyo embed throwing on a country code I had never seen.

Step two was source maps. PostHog ships a Vite plugin that uploads on build:


import { defineConfig } from 'vite'
import { sourcemapsPlugin } from '@posthog/vite-plugin'

export default defineConfig({
  build: { sourcemap: 'hidden' }, // generate maps without referencing them in the served bundle
  plugins: [
    sourcemapsPlugin({
      project: 'raxxo-storefront',
      apiKey: process.env.POSTHOG_PERSONAL_API_KEY, // personal API key, not the project key
      host: 'https://eu.posthog.com',
    }),
  ],
})

`sourcemap: 'hidden'` keeps the maps off the public CDN. The plugin uploads them to PostHog at the end of the build. Upload time on my project is around 3 seconds. Stack traces in the dashboard now point at real source lines, not minified `t.js:1:42031` nonsense.

Step three was a find-and-replace. I had 38 call sites using `Sentry.captureException`. Most of them looked like this:


// before
Sentry.captureException(err, { extra: { userId } })

// after
posthog.captureException(err, { user_id: userId })

A codemod would have been clean, but the sites were spread across three repos and a Shopify theme, so I did it by hand in 40 minutes. I left the Sentry SDK installed for one week as a safety net, comparing issue counts in both dashboards. They tracked within 4%. The 4% gap was Sentry deduping more aggressively, not PostHog missing events.

Step four was decommission. I uninstalled `@sentry/browser` and `@sentry/vite-plugin`, removed the init blocks, deleted the env vars from Vercel, cancelled the Sentry subscription, and exported a year of historical issues to a JSON file in cold storage. Bundle size dropped by 38KB gzipped. The 80 EUR/month line item went to 0.

If you are running a similar consolidation, my first-party analytics stack write-up covers the same one-tool philosophy applied to traffic data.

Where PostHog Loses, And How I Cover It

I am not going to pretend this is a clean win on every axis. Sentry still has things PostHog does not, and I had to decide which of those things actually mattered to me.

Release health is the big one. Sentry tracks crash-free session rates per release, regression detection across deploys, and a proper release timeline. PostHog has releases, but the release-health view is shallow. For a one-person studio shipping a few times a week, I do not need cohort-level crash analysis. If you are running a 50-engineer org with a weekly train, this is a real tradeoff.

Advanced fingerprinting is the second gap. Sentry lets you write fingerprint rules to merge or split issue groups with surgical control. PostHog gives you a manual fingerprint hint and that is it. I have hit two cases in three months where I wanted Sentry-grade grouping. Both times I solved it by adding a custom error class with a stable `name` property, which PostHog groups by cleanly.
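
To make that workaround concrete, here is a sketch of the pattern; the error class, the status check, and the wrap site are hypothetical stand-ins for my real call sites:


// Hypothetical example: a custom error class with a stable `name`
// gives PostHog a clean grouping key even when stack traces vary.
class CheckoutRateLimitError extends Error {
  constructor(message, options) {
    super(message, options)
    this.name = 'CheckoutRateLimitError'
  }
}

try {
  await checkoutMutation(cart)
} catch (err) {
  if (err?.status === 429) {
    // keep the original error attached as the cause
    throw new CheckoutRateLimitError('checkout rate limited', { cause: err })
  }
  throw err
}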

Server-side performance traces are the third. PostHog does have backend SDKs, but distributed tracing across services is not its strength. I run OpenTelemetry into Vercel Logs and Grafana Cloud for backend tracing. That stack is free at my volume and gives me proper span timing, which I needed for a slow Shopify webhook handler last month.
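
For completeness, the backend side is the stock OpenTelemetry Node bootstrap pointed at Grafana Cloud's OTLP endpoint. A minimal sketch, assuming the endpoint and auth header arrive through the standard OTEL_EXPORTER_OTLP_* environment variables; the service name is a placeholder, not my real config:


import { NodeSDK } from '@opentelemetry/sdk-node'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'

// With no constructor args, the exporter reads OTEL_EXPORTER_OTLP_ENDPOINT
// and OTEL_EXPORTER_OTLP_HEADERS from the environment, which is where the
// Grafana Cloud URL and auth token live.
const sdk = new NodeSDK({
  serviceName: 'raxxo-webhooks', // placeholder name
  traceExporter: new OTLPTraceExporter(),
})

sdk.start()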

The bridge stack ends up looking like this. PostHog handles all browser errors, session replay, product analytics, and feature flags. OpenTelemetry plus Vercel Logs handles backend traces and structured logs. Cron job failures land in PostHog through a tiny wrapper that sends `posthog.captureException` from the Node side. For LLM-specific error patterns I am building on top of the eval setup in Running LLM Evals in Production, where errors and quality regressions live in the same place.
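
The cron wrapper is a few lines. A sketch of its shape, assuming posthog-node's `captureException(error, distinctId, properties)` signature; the job name and distinct ID scheme are placeholders:


import { PostHog } from 'posthog-node'

const posthog = new PostHog(process.env.POSTHOG_KEY, {
  host: 'https://eu.i.posthog.com',
})

// Wrap a cron entry point: report the failure to PostHog, flush the
// client so the event survives process exit, re-throw for the scheduler.
export async function runJob(name, fn) {
  try {
    await fn()
  } catch (err) {
    posthog.captureException(err, `cron:${name}`, { job: name })
    throw err
  } finally {
    await posthog.shutdown()
  }
}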

The honest summary: PostHog Error Tracking is 85% of Sentry at 0% of the cost when you already run PostHog. The missing 15% is solvable with a free tracing tier and one custom error class. For a solo studio, that math is not even close.

Bottom Line

Solo studios should not pay for two observability tools. If PostHog is already loaded for analytics or session replay, error tracking is a config flag and a Vite plugin away. The migration cost me one evening and saved 960 EUR a year on a stack that now ships fewer bytes and fewer dashboards.

I will say the obvious thing. If you have a real release-health need, or a team big enough to justify Sentry's grouping rules, stay on Sentry. If you are one person shipping a Shopify storefront and a couple of side apps, the second tool is dead weight. Cut it.

You can see the rest of how I run a one-person AI studio on minimal infrastructure over at Studio, where I keep the running list of what is loaded into the stack and what got dropped. The pattern is always the same: one tool that does 85% of the job beats two tools that each do 100% of their own slice, every single time the bill comes due.

This article contains affiliate links. If you sign up through them, I may earn a small commission at no extra cost to you. (Ad)
