How GYIBB Works

We aggregate public discussion about consumer products and synthesize it into a single, sourced review. The methodology is open — you can audit any verdict back to the comments it came from.

The autonomous pipeline

Every GYIBB review is produced end-to-end by a team of specialized AI agents — no human editorial involvement, no sponsored placement, no exceptions. Here's exactly who does what.

Hunter Node 0 · Trend Discovery

Monitors product launches, Reddit threads, HackerNews frontpage, and ProductHunt to surface products people are actively discussing. Feeds the queue that the rest of the pipeline processes.
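
As a rough illustration, a discovery poller of this kind might look like the sketch below. The HackerNews endpoint is the real public Firebase API; the Redis queue name and the "Show HN" filter are assumptions for the example, not GYIBB's actual logic.

```python
import requests
import redis

r = redis.Redis()

# Poll the public HackerNews Firebase API for frontpage stories.
top = requests.get(
    "https://hacker-news.firebaseio.com/v0/topstories.json", timeout=10
).json()

for story_id in top[:30]:  # the frontpage is roughly the top 30 stories
    item = requests.get(
        f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json", timeout=10
    ).json()
    # Naive product-launch filter: "Show HN" posts are often product debuts.
    if item and item.get("title", "").startswith("Show HN"):
        # Queue name "gyibb:discovery" is hypothetical.
        r.lpush("gyibb:discovery", item["title"])
```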

Scout Node 1 · Data Harvester

Harvests raw user discussion from 8 platforms simultaneously: Reddit, Trustpilot, YouTube (comments + transcripts), HackerNews, Lemmy, Stack Exchange, ProductHunt, and the brand's own site. Runs on a residential IP to reach Trustpilot and Reddit without datacenter blocks.
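
A minimal sketch of that fan-out: the stub fetchers below stand in for the real per-platform scrapers, each of which needs its own auth, rate limiting, and parsing. The point is that all eight platforms are fetched concurrently, not one after another.

```python
import asyncio

PLATFORMS = ["reddit", "trustpilot", "youtube", "hackernews",
             "lemmy", "stackexchange", "producthunt", "brand_site"]

async def harvest(platform: str, product: str) -> list[str]:
    # Stub for a per-platform client; simulated I/O only.
    await asyncio.sleep(0.1)
    return [f"{platform}: example comment about {product}"]

async def harvest_all(product: str) -> dict[str, list[str]]:
    # Launch all eight fetches at once and wait for the slowest.
    results = await asyncio.gather(*(harvest(p, product) for p in PLATFORMS))
    return dict(zip(PLATFORMS, results))

comments = asyncio.run(harvest_all("example-headphones"))
```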

Quinn Node 2 · Synthesis Engine

Feeds all harvested data into the four-layer reality model (see below) and runs the GLM-5.1 synthesis with extended thinking enabled. Produces a structured review: rating, sentiment distribution, pros/cons, tension points, and the full four-layer narrative. The rating is derived mathematically from sentiment — not generated by the LLM directly.
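
GYIBB's exact formula isn't published here, but a weighted average of this kind shows what "derived mathematically from sentiment" means in practice. The weights below are illustrative, not the production values.

```python
def rating_from_sentiment(positive: int, neutral: int, negative: int) -> float:
    """Illustrative only: maps a sentiment distribution onto a 1-5 scale.
    GYIBB's actual weighting is not specified in this document."""
    total = positive + neutral + negative
    if total == 0:
        raise ValueError("no voices to rate")
    # Weight each voice: positive=5, neutral=3, negative=1, then average.
    score = (5 * positive + 3 * neutral + 1 * negative) / total
    return round(score, 1)

print(rating_from_sentiment(62, 20, 18))  # -> 3.9
```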

Ubik Adversarial Fact-Checker

An independent agent that receives Quinn's draft and the raw source data — with no knowledge that Quinn produced it. Its only job: find claims in the review that aren't supported by the source evidence. Reviews that fail are rejected and re-run. Ubik also surfaces contradictions between layers (e.g. brand claims 30h battery, median user reports 14h).
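
In outline, the adversarial pass has the shape sketched below. The type names are hypothetical, and the substring match is a stand-in for the LLM-based evidence check a real Ubik would run; what matters is that every claim must be traceable to harvested source text, and a single unsupported claim rejects the draft.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    claim: str
    supported: bool
    evidence: str | None

def fact_check(claims: list[str], sources: list[str]) -> list[Finding]:
    # Check each extracted claim against the raw harvested sources.
    findings = []
    for claim in claims:
        match = next((s for s in sources if claim.lower() in s.lower()), None)
        findings.append(Finding(claim, match is not None, match))
    return findings

def verdict(findings: list[Finding]) -> str:
    # One unsupported claim is enough to reject and re-run the synthesis.
    return "accept" if all(f.supported for f in findings) else "reject"
```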

Ella Node 3 · Publishing Director

Validates the review against GYIBB's data floor (minimum voice count, platform diversity, schema integrity). Attaches affiliate links, generates SEO metadata, and writes the review file to the site. Emits a page.published event that kicks off the downstream agents.
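
A compressed sketch of that gate-and-publish step, assuming a Redis stream named gyibb:events. The stream name and exact thresholds are assumptions (the 10-voice minimum matches the lowest confidence tier described below).

```python
import json
import redis

r = redis.Redis()

def publish(review: dict) -> None:
    # Data-floor gates; threshold values here are illustrative.
    assert review["voice_count"] >= 10, "below minimum voice count"
    assert len(review["platforms"]) >= 2, "insufficient platform diversity"
    assert {"rating", "pros", "cons"} <= review.keys(), "schema integrity"

    # ... write the review file to the site (omitted) ...

    # Emit page.published on the event bus; Conley, Vera, and Rex
    # consume this event in parallel downstream.
    r.xadd("gyibb:events", {"type": "page.published",
                            "payload": json.dumps({"slug": review["slug"]})})
```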

After Ella's page.published event, the next three agents run in parallel:

Conley Node 4 · SEO Analyst

Submits new URLs to Google Search Console, monitors CTR and impressions, and flags reviews that need optimization.
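
Using Google's real Search Console API (via google-api-python-client), a Conley-style CTR check could look like the sketch below. The site URL, date range, and flagging thresholds are illustrative.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
gsc = build("searchconsole", "v1", credentials=creds)

# Pull page-level CTR and impressions for a recent window.
report = gsc.searchanalytics().query(
    siteUrl="https://gyibb.com/",
    body={"startDate": "2025-01-01", "endDate": "2025-01-31",
          "dimensions": ["page"], "rowLimit": 500},
).execute()

# Flag pages that get seen but not clicked (thresholds are illustrative).
for row in report.get("rows", []):
    if row["impressions"] > 500 and row["ctr"] < 0.01:
        print("needs optimization:", row["keys"][0])
```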

Vera Node 7 · Social Voice

Posts 3 content angles (stat, tension, verdict) to X and Bluesky. Also monitors @mentions and replies with data-backed answers.
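
The three angles are just three templates rendered from the same review record; a sketch with hypothetical field names:

```python
def content_angles(review: dict) -> dict[str, str]:
    """Three post variants from one review: a headline stat, the sharpest
    brand-vs-user tension, and the bottom-line verdict. Field names are
    illustrative, not GYIBB's actual schema."""
    return {
        "stat": f"{review['voice_count']} real users weighed in on "
                f"{review['product']}: {review['rating']}/5.",
        "tension": f"{review['product']}: brand says {review['brand_claim']}, "
                   f"users report {review['user_reality']}.",
        "verdict": f"Our verdict on {review['product']}: {review['verdict']}",
    }
```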

Rex Node 8 · PR Agent

Monitors journalist query platforms (HARO, SourceBottle) and pitches data-backed responses to earn editorial backlinks.

Always watching, across every stage:

Shepherd Node 6 · Pipeline Observer

Watches every event on the pipeline bus (Redis Streams). Tracks cycle health, detects stuck or failed runs, maintains the admin dashboard, and exposes the /api/v1/cycles API that powers the internal monitoring UI.

All agents communicate over a Redis Streams event bus. Each event is durably stored — no message is lost during restarts or deploys. The pipeline is fully observable: every step is logged, every rejection has a reason code, and the admin dashboard shows cycle state in real time.
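
The durability comes from consumer groups: an event stays pending until the consuming agent acknowledges it, so a restart mid-cycle re-delivers rather than drops. A minimal consumer sketch with redis-py; the stream, group, and consumer names are illustrative.

```python
import redis

r = redis.Redis()

def handle(fields: dict) -> None:
    print("event:", fields)  # stand-in for the agent's real handler

# Create this agent's consumer group once (BUSYGROUP means it exists).
try:
    r.xgroup_create("gyibb:events", "ella", id="0", mkstream=True)
except redis.exceptions.ResponseError:
    pass

while True:  # long-running consumer loop
    entries = r.xreadgroup("ella", "worker-1",
                           {"gyibb:events": ">"}, count=10, block=5000)
    for _stream, messages in entries or []:
        for msg_id, fields in messages:
            handle(fields)
            # Acknowledge only after successful processing, so a crash
            # before this line leaves the event pending for re-delivery.
            r.xack("gyibb:events", "ella", msg_id)
```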

The four-layer reality model

Every review is built from up to four independent layers. We flag where they agree, where they contradict, and where data is thin; a sketch of how the layers fit together follows the four descriptions below.

User reality

Comments from Reddit, HackerNews, Lemmy, Stack Exchange, ProductHunt, and YouTube — actual people describing their experience with the product.

Video reality

Long-form review videos: what independent reviewers tested, what they measured, what they concluded.

Internet reality

Aggregate ratings from publishers like Wirecutter, RTINGS, DPReview, NotebookCheck, and category-specific authorities.

Brand reality

What the manufacturer claims on their official site. Compared (not trusted) against the other three layers.
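
As a concrete picture of this structure, here is a hypothetical record sketch; the field and type names are assumptions, but the design point is real: layers stay separate so agreement and contradiction can be computed between them rather than averaged away.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    present: bool  # a layer can be missing entirely for a given product
    claims: dict[str, str] = field(default_factory=dict)

@dataclass
class Review:
    user: Layer      # Reddit, HackerNews, Lemmy, Stack Exchange, ProductHunt, YouTube
    video: Layer     # long-form independent review videos
    internet: Layer  # Wirecutter, RTINGS, DPReview, NotebookCheck, ...
    brand: Layer     # manufacturer claims: compared, not trusted

    def contradictions(self) -> list[str]:
        # e.g. brand "battery_life" = "30h" vs user "battery_life" = "14h"
        out = []
        for key, brand_val in self.brand.claims.items():
            user_val = self.user.claims.get(key)
            if user_val is not None and user_val != brand_val:
                out.append(f"{key}: brand says {brand_val}, users say {user_val}")
        return out
```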

The data floor

Truth comes from sample size. A handful of opinions cannot honestly be called a "synthesis of real user reviews," so we refuse to publish below a minimum: every review must clear a floor of user voices, discussion spread across multiple platforms, and a structurally valid review file (the same checks Ella enforces before publishing).

Products that fail these checks stay in our queue and are reviewed again later. They never appear on the site as a half-baked verdict.

Confidence tiers

Even after passing the floor, we tell you how strong the signal is:

Limited data
10–29 user voices. Verdict based on early signals; check back as more discussion accrues.
Solid
30–79 user voices. Verdict synthesized from substantial user feedback.
High confidence
80+ user voices across multiple platforms. Verdict drawn from a large, diverse pool.
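
These thresholds translate directly into code; a literal transcription of the tiers above, with the sub-floor branch reflecting the data-floor rule:

```python
def confidence_tier(voices: int) -> str:
    # Thresholds are exactly the ones listed above; anything under 10
    # stays below the data floor and is never published.
    if voices >= 80:
        return "High confidence"
    if voices >= 30:
        return "Solid"
    if voices >= 10:
        return "Limited data"
    return "below data floor (not published)"

assert confidence_tier(85) == "High confidence"
assert confidence_tier(42) == "Solid"
assert confidence_tier(12) == "Limited data"
```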

Adversarial fact-checking

Before publishing, every synthesized review is examined by a separate AI agent (we call it Ubik) acting as an adversarial fact-checker. Its job is to find any claim in the review that isn't supported by the source data we collected. Reviews that fail this check are rejected and re-run.

We use this same adversarial pass to surface contradictions across the four reality layers — for example, when the brand claims 30-hour battery life but the median user comment reports 14 hours.
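
The 30-hour example reduces to a median comparison: collect the user-reported values, take the median, and flag the claim when the two diverge. An illustrative check follows; the tolerance band is an assumption, not a published parameter.

```python
from statistics import median

def battery_contradiction(brand_claim_h: float, user_reports_h: list[float],
                          tolerance: float = 0.25) -> str | None:
    """Flags a cross-layer contradiction when the median user-reported
    value falls well short of the brand claim. Illustrative logic only."""
    observed = median(user_reports_h)
    if observed < brand_claim_h * (1 - tolerance):
        return (f"brand claims {brand_claim_h:g}h, "
                f"median user reports {observed:g}h")
    return None

# The example from the text: a 30h claim against a 14h median.
print(battery_contradiction(30, [13, 14, 14, 15, 16]))
```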

What we do not do

No human editing of verdicts, no sponsored placements, no pay-for-review arrangements. Every product goes through the same pipeline and the same quality gates, whether or not an affiliate program exists for it.

Affiliate disclosure

GYIBB earns commissions from some links (Amazon Associates and select brand programs). The presence of an affiliate link never influences what we say about a product or how we score it — the synthesis is locked before any link is attached. We never write a review just because a product has a high commission, and our quality gates apply equally to affiliate and non-affiliate products.

Source attribution

Every claim we make traces back to a public discussion. The review page itself shows the platform breakdown and counts. If you want to audit a specific verdict, follow the source links in the user-reality section — every quote can be traced to its origin thread.

Free MCP server

Plug GYIBB into your AI agent.

Other AI agents can query the GYIBB Truth Engine over the Model Context Protocol: for free, with no API key, and with proper attribution baked into every response. Four tools are available.

Endpoint

https://gyibb.com/mcp

Compatible with Claude Desktop, OpenClaw, and any MCP-aware agent. Streamable HTTP transport, no auth required at this stage.
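
For example, assuming the official MCP Python SDK (pip install mcp), connecting over the Streamable HTTP transport and listing the available tools looks roughly like this:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Streamable HTTP transport, no auth, matching the description above.
    async with streamablehttp_client("https://gyibb.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```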