Guides April 24, 2026

How to Test GitHub Webhooks Locally Without Smee.io

Testing GitHub webhooks during local development has the same problem every webhook system has. Your app is running on localhost, and GitHub needs a public URL to reach it. GitHub's own answer is smee.io: a lightweight proxy built by the Probot team that streams webhook deliveries to a client running on your machine over Server-Sent Events.

Smee is fine. It works, it's free, and it needs no configuration. But it's not the only option, and for a lot of workflows it's not the best one. This guide covers three ways to test GitHub webhooks locally without touching smee.io, plus a bonus tip using GitHub's own redelivery feature.

TL;DR

Tunnels (ngrok, Cloudflare Tunnel) get real GitHub events hitting real local code. Crafted payloads with generated signatures get you fast, repeatable automated tests. Capture-and-replay tools get you real payloads you can replay on your own schedule. Most developers end up using a combination.

Why smee.io might not be the right fit

Smee.io is purpose-built for developing GitHub Apps with Probot. Inside that specific workflow it's excellent. Outside it, a few things start to feel limiting:

  1. Channels are public. Anyone who knows (or guesses) your channel URL can watch your webhook payloads stream in.
  2. Deliveries aren't persisted. If your smee client isn't running when GitHub fires the event, that delivery is effectively gone, and there's no stored history to replay later.
  3. It's GitHub-shaped. There's no multi-endpoint management and nothing designed for testing webhooks from other providers alongside GitHub.

None of these are dealbreakers for the GitHub App use case it was built for. But if you're doing anything more general, there are better tools for the job.

Option 1: Use a tunnel (ngrok, Cloudflare Tunnel)

The simplest drop-in replacement. Run a tunnel, get a public URL, paste it into your repository's webhook settings.

With ngrok:

ngrok http 8000

You get a URL like https://a1b2c3d4.ngrok-free.app. Configure that as your webhook URL in Settings → Webhooks → Add webhook, set the content type to application/json, and add a secret if you want signature verification (you should).

The good: Real GitHub events hit your actual local code. Real signature verification. You can step through your handler in a debugger. For end-to-end testing, nothing beats it.

The annoying: Free ngrok URLs rotate on restart, which means updating your webhook settings every session. Free tier has a browser interstitial that can interfere with delivery. Paid plans start at $8/month for stable subdomains.

If you're already in the Cloudflare ecosystem, cloudflared tunnel --url http://localhost:8000 does the same job. Named tunnels give you a stable subdomain without paying for a reserved domain.

Other options worth knowing: localhost.run needs no installation (ssh -R 80:localhost:8000 nokey@localhost.run), and npx localtunnel --port 8000 works without an account.

When to use this: You need to test the full flow with real GitHub events. Receiving, signature verification, idempotency handling, the lot. It's the most realistic option and the closest thing to production.

Option 2: Craft local webhook payloads yourself

Skip the network entirely and POST directly to your local endpoint. This gives you total control over the payload, which is useful when you want to test edge cases that are awkward to trigger through GitHub itself (a 500-commit push, a PR from a forked repo with strange permissions, a ping from a specific user).

curl -X POST http://localhost:8000/github/webhook \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: push" \
  -H "X-GitHub-Delivery: 72d3162e-cc78-11e3-81ab-4c9367dc0958" \
  -d '{
    "ref": "refs/heads/main",
    "repository": { "name": "test-repo", "full_name": "you/test-repo" },
    "pusher": { "name": "you", "email": "you@example.com" }
  }'

The problem: if your handler verifies the X-Hub-Signature-256 header (and it should), this raw request will fail. GitHub computes the signature over the exact raw body bytes using your webhook secret, and there's no way to fake a valid signature without the secret.

This is why crafted payloads work best inside your test suite rather than as ad-hoc curl commands. A Jest or PHPUnit test can build a payload, generate a valid HMAC-SHA256 signature using the test secret, and send it through the same code path as a real webhook.

GitHub's signature is an HMAC-SHA256 of the raw payload, prefixed with sha256=:

// In a test, generate a valid GitHub webhook signature
$secret = config('services.github.webhook_secret');
$payload = json_encode($eventData);
$signature = 'sha256=' . hash_hmac('sha256', $payload, $secret);

$response = $this->call('POST', '/github/webhook',
    [], [], [],
    [
        'HTTP_X-GitHub-Event' => 'push',
        'HTTP_X-GitHub-Delivery' => (string) Str::uuid(),
        'HTTP_X-Hub-Signature-256' => $signature,
        'CONTENT_TYPE' => 'application/json',
    ],
    $payload
);

A couple of things that will bite you here:

  1. Hash the raw body, not the parsed body. If your framework re-serialises JSON before you verify, the check may pass in development by luck and then fail in production (different key order, different whitespace, different signature).
  2. Use a constant-time comparison (hash_equals in PHP, crypto.timingSafeEqual in Node) when comparing signatures. Never use == or ===. This is genuinely important, not boilerplate advice.

The raw-vs-parsed body issue bites across every webhook provider. If you're also working with Stripe, the same mistake shows up there and the fix is identical.

When to use this: Automated tests, CI, fast iteration on handler logic without the overhead of going through GitHub each time. This pattern is how most mature GitHub integrations exercise their webhook handlers in test suites.

Option 3: Capture real webhooks, replay them later

This is the approach that changes how you develop, not just how you test.

Instead of pointing GitHub at your local server (via a tunnel) or generating fake payloads (via tests), you point GitHub at an inspection tool that captures and stores the raw request. Then you replay that exact request to your local server whenever you're ready.

The workflow:

  1. Create an endpoint on an inspection tool. You get a public URL.
  2. Paste that URL into your repository's webhook settings.
  3. Do whatever triggers the event (push a commit, open a PR, file an issue).
  4. The tool captures the full request. Method, headers, body, everything.
  5. Inspect the payload to understand exactly what GitHub sent.
  6. Replay it to http://localhost:8000/github/webhook when you're ready.
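Step 6 is just re-sending the captured request verbatim. A small helper that turns a captured delivery into fetch options makes the replay scriptable; the captured-record shape and target URL here are assumptions, not any particular tool's export format:

```javascript
// Turn a captured webhook delivery into something you can re-send with fetch().
// The captured-record shape is an assumption, not any tool's export format.
function buildReplay(captured, targetUrl) {
  return {
    url: targetUrl,
    options: {
      method: captured.method,
      // Keep the original headers so X-GitHub-Event, X-GitHub-Delivery and
      // X-Hub-Signature-256 survive the replay intact.
      headers: captured.headers,
      body: captured.body, // the raw body, byte-for-byte
    },
  };
}

const captured = {
  method: 'POST',
  headers: { 'X-GitHub-Event': 'push', 'Content-Type': 'application/json' },
  body: '{"ref":"refs/heads/main"}',
};

const replay = buildReplay(captured, 'http://localhost:8000/github/webhook');
// fetch(replay.url, replay.options) would deliver it to your local handler
```

Because the raw body and signature header travel together untouched, a replayed request passes the same signature verification the original did.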

Why this matters: You're working with real payloads from real GitHub events, but you're not under time pressure. Replay the same push event 20 times while debugging your handler. Replay it after changing your code without triggering a new commit in a real repo. And the request history persists, so you can come back to a tricky payload tomorrow morning.

This is particularly useful when the event is hard to reproduce on demand. Branch protection bypass events, issue transfers between repos, releases with specific prerelease states. Reconstructing those by hand is painful. Capturing one real example and replaying it is much faster.

It also scales. If you're building a tool that receives webhooks from GitHub and Stripe and a Jira integration, a capture-and-replay setup covers all of them with the same workflow. Smee can't do that.

Tools that support this workflow include WebhookHub, Webhook.site, Beeceptor, and Hookdeck. They differ in persistence, replay capabilities, and how many endpoints you can manage, but the core workflow is the same.

Bonus: GitHub's built-in redelivery

This is the feature most developers forget exists, and it's already in your repository settings.

Go to Settings → Webhooks → your webhook → Recent Deliveries. You'll see every webhook GitHub has tried to send you in the last 30 days or so. Click any of them to see the full request and response. Click Redeliver to re-send it.

This is genuinely useful in two scenarios:

  1. Post-mortem debugging. Your handler threw a 500 last Tuesday. You fixed the bug. You want to confirm the fix works against the actual event that caused the problem. Redeliver it.
  2. Exploratory development. You made a change to your handler and want to test it against a real event without waiting for something to happen organically.

It's not a replacement for local testing. Redeliver still sends to whatever public URL you have configured, so you need a tunnel or capture tool on the other end. But it's a free way to replay production events that most developers overlook.
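The Redeliver button also has a REST API equivalent, which is handy for scripting: POST to the delivery's attempts endpoint. This sketch only builds the URL; the owner, repo, and IDs are placeholders, and you'd find real delivery IDs by listing GET /repos/{owner}/{repo}/hooks/{hook_id}/deliveries first:

```javascript
// Build the REST endpoint for redelivering a specific webhook delivery.
// Owner, repo, and IDs are placeholders.
function redeliveryUrl(owner, repo, hookId, deliveryId) {
  return `https://api.github.com/repos/${owner}/${repo}` +
    `/hooks/${hookId}/deliveries/${deliveryId}/attempts`;
}

console.log(redeliveryUrl('you', 'test-repo', 12345678, 98765432));
```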

Which approach should you actually use?

It depends on what you're testing:

Scenario → Best approach

"Does my endpoint work at all?" → Tunnel + trigger a real event
"I'm building against multiple providers" → Capture and replay
"I need fast, repeatable tests in CI" → Crafted payloads with generated signatures
"I'm debugging a specific event from last week" → Capture and replay, or GitHub's Redeliver button
"I'm developing a GitHub App with Probot" → Honestly? Smee is fine for this

Most developers end up mixing several. Tunnels for end-to-end testing, crafted payloads for automated tests, and capture-and-replay for debugging specific issues or working across providers.

Three mistakes that will cost you hours

1. Using the wrong signature header

GitHub sends two signature headers: X-Hub-Signature (SHA-1) and X-Hub-Signature-256 (SHA-256). GitHub documentation is explicit that SHA-1 is legacy and only included for backward compatibility. Always verify against X-Hub-Signature-256.

If you're reading tutorials from before 2020, you'll see code that uses the SHA-1 header. Don't copy it. Modernise the code to SHA-256.

2. Hashing the parsed body instead of the raw body

This is the most common webhook signature bug across every provider, and it's a nasty one. The signature is computed over the exact bytes GitHub sent. If your framework parses and re-serialises the JSON before you verify, the signature check fails. And if it doesn't fail in development (because the parser happens to preserve key order) it'll fail in production when a payload comes through with a different structure.

In Express, define the webhook route before express.json(), using express.raw() instead:

// This route must come BEFORE app.use(express.json())
app.post('/github/webhook',
  express.raw({ type: 'application/json' }),
  handleWebhook
);

In Laravel, read the body via $request->getContent(), not $request->all() or $request->json(). The same principle applies everywhere: the signature is over the raw bytes, so that's what you need to hash.

3. Using == to compare signatures

Don't do this:

// Vulnerable to timing attacks
if ($received === $computed) { /* ... */ }

Do this:

// Constant-time comparison
if (hash_equals($computed, $received)) { /* ... */ }

In Node, the equivalent is crypto.timingSafeEqual. In Python, hmac.compare_digest. Every modern language has one.

The vulnerability here is subtle: a naive string comparison returns faster when the first characters match, which leaks information about the expected value one byte at a time. In practice, exploiting this against a well-connected server is hard. But it's a five-character fix, and leaving it in signals that the rest of your security review was equally relaxed.

Wrapping up

Smee.io is a great tool for the problem it was built for: developing GitHub Apps with Probot. For everything else, you've got better options depending on what you're actually trying to do.

Tunnels get you real events hitting real code. Crafted payloads get you fast, deterministic tests. Capture-and-replay gets you persistent history, replay on demand, and a workflow that scales across every webhook provider you work with, not just GitHub.

The common thread across all of these: shorten the feedback loop between "GitHub fires an event" and "I can see what my code does with it." The faster that loop gets, the fewer hours you lose to debugging silent failures.

If you're testing webhooks regularly, from GitHub or anywhere else, WebhookHub gives you persistent request history, real-time streaming, and replay on the free tier. Built for exactly this kind of workflow.

Stop debugging blind.

Get your first endpoint in 30 seconds. No credit card. No setup. Just clarity.

Start for free