How GYIBB Works
We aggregate public discussion about consumer products and synthesize it into a single, sourced review. The methodology is open — you can audit any verdict back to the comments it came from.
The autonomous pipeline
Every GYIBB review is produced end-to-end by a team of specialized AI agents — no human editorial involvement, no sponsored placement, no exceptions. Here's exactly who does what.
- Discovery: monitors product launches, Reddit threads, the HackerNews front page, and ProductHunt to surface products people are actively discussing, and feeds the queue that the rest of the pipeline processes.
- Harvesting: collects raw user discussion from 8 platforms simultaneously — Reddit, Trustpilot, YouTube (comments + transcripts), HackerNews, Lemmy, Stack Exchange, ProductHunt, and the brand's own site. Runs on a residential IP to reach Trustpilot and Reddit without datacenter blocks.
- Synthesis (Quinn): feeds all harvested data into the four-layer reality model (see below) and runs the GLM-5.1 synthesis with extended thinking enabled. Produces a structured review: rating, sentiment distribution, pros/cons, tension points, and the full four-layer narrative. The rating is derived mathematically from sentiment — not generated by the LLM directly (see the sketch after this roster).
- Fact-checking (Ubik): an independent agent that receives Quinn's draft and the raw source data — with no knowledge that Quinn produced it. Its only job: find claims in the review that aren't supported by the source evidence. Reviews that fail are rejected and re-run. Ubik also surfaces contradictions between layers (e.g. the brand claims 30h battery, the median user reports 14h).
- Publishing: validates the review against GYIBB's data floor (minimum voice count, platform diversity, schema integrity), attaches affiliate links, generates SEO metadata, and writes the review file to the site. Emits a page.published event that kicks off the downstream agents.
- Search: submits new URLs to Google Search Console, monitors CTR and impressions, and flags reviews that need optimization.
- Social: posts 3 content angles (stat, tension, verdict) to X and Bluesky. Also monitors @mentions and replies with data-backed answers.
- Outreach: monitors journalist query platforms (HARO, SourceBottle) and pitches data-backed responses to earn editorial backlinks.
- Observability: watches every event on the pipeline bus (Redis Streams). Tracks cycle health, detects stuck or failed runs, maintains the admin dashboard, and exposes the /api/v1/cycles API that powers the internal monitoring UI.
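Because the rating step is deterministic, it can be expressed as a small function. The exact formula isn't published, so what follows is only a minimal sketch of the idea, assuming the sentiment distribution arrives as positive/neutral/negative voice counts (all names are illustrative):

```typescript
// Illustrative only: the actual GYIBB formula is not published.
interface SentimentDistribution {
  positive: number; // count of clearly positive user voices
  neutral: number;
  negative: number;
}

// Map the distribution onto a 0-10 scale: all-positive -> 10,
// all-negative -> 0, neutral voices pulling toward the midpoint.
function ratingFromSentiment(s: SentimentDistribution): number {
  const total = s.positive + s.neutral + s.negative;
  if (total === 0) throw new Error("no voices: the data floor should prevent this");
  const score = (s.positive + 0.5 * s.neutral) / total; // in [0, 1]
  return Math.round(score * 100) / 10; // one decimal on a 0-10 scale
}

// 24 positive, 6 neutral, 10 negative -> (24 + 3) / 40 = 0.675 -> 6.8
console.log(ratingFromSentiment({ positive: 24, neutral: 6, negative: 10 }));
```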
All agents communicate over a Redis Streams event bus. Each event is durably stored — no message is lost during restarts or deploys. The pipeline is fully observable: every step is logged, every rejection has a reason code, and the admin dashboard shows cycle state in real time.
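The durability falls out of Redis Streams itself: producers append entries with XADD, and consumers read through consumer groups, which persist delivery state until each entry is acknowledged, so a restarted agent resumes where it left off. A minimal sketch with the ioredis client (stream, group, and field names are illustrative, not GYIBB's actual schema):

```typescript
import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379

// Producer side: a publishing step appends a page.published event.
await redis.xadd(
  "pipeline:events", "*",      // "*" lets Redis assign the entry ID
  "type", "page.published",
  "slug", "example-product",   // illustrative payload field
);

// Consumer side: each downstream agent owns a consumer group, so
// unacknowledged entries survive restarts and deploys.
await redis
  .xgroup("CREATE", "pipeline:events", "seo-agent", "$", "MKSTREAM")
  .catch(() => {}); // ignore "group already exists"

const entries = await redis.xreadgroup(
  "GROUP", "seo-agent", "worker-1",
  "COUNT", 10, "BLOCK", 5000,
  "STREAMS", "pipeline:events", ">", // ">" = only never-delivered entries
);
// ...process entries, then acknowledge so they are not redelivered:
// await redis.xack("pipeline:events", "seo-agent", entryId);
```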
The four-layer reality model
Every review is built from up to four independent layers. We flag where they agree, where they contradict, and where data is thin.
User reality
Comments from Reddit, HackerNews, Lemmy, Stack Exchange, ProductHunt, and YouTube — actual people describing their experience with the product.
Video reality
Long-form review videos: what independent reviewers tested, what they measured, what they concluded.
Internet reality
Aggregate ratings from publishers like Wirecutter, RTINGS, DPReview, NotebookCheck, and category-specific authorities.
Brand reality
What the manufacturer claims on their official site. Compared (not trusted) against the other three layers.
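In data terms, each layer is just an array of sourced evidence attached to the review. A sketch of the shape (field names are illustrative, not GYIBB's schema):

```typescript
// Illustrative shape of the four-layer evidence bundle.
interface RealityLayers {
  user: { platform: string; comment: string; url: string }[];   // Reddit, HN, Lemmy, ...
  video: { channel: string; measured: string[]; conclusion: string }[] | null;
  internet: { publisher: string; rating: number }[] | null;     // Wirecutter, RTINGS, ...
  brand: { claim: string; source: string }[] | null;            // compared, never trusted
}
```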
The data floor
Truth comes from sample size. A handful of opinions cannot honestly be called a "synthesis of real user reviews." We refuse to publish below a minimum:
- At least 10 user voices across all platforms combined
- At least 2 distinct platforms — single-platform reviews are echo chambers
- A real user-experience prose section ≥ 200 characters
- A category in our canonical taxonomy
- A rating that is not the LLM fallback sentinel (5.5)
Products that fail these checks stay in our queue and are reviewed again later. They never appear on the site as a half-baked verdict.
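The floor is mechanical enough to read as code. A sketch of the check, with illustrative field names but the published thresholds:

```typescript
// Illustrative data-floor check mirroring the published thresholds.
interface ReviewDraft {
  voices: { platform: string }[]; // one entry per harvested user voice
  userExperienceProse: string;
  category: string | null;        // must resolve to the canonical taxonomy
  rating: number;
}

const FALLBACK_SENTINEL = 5.5; // value emitted when the LLM could not rate

function passesDataFloor(r: ReviewDraft, taxonomy: Set<string>): boolean {
  const platforms = new Set(r.voices.map((v) => v.platform));
  return (
    r.voices.length >= 10 &&               // at least 10 user voices
    platforms.size >= 2 &&                 // at least 2 distinct platforms
    r.userExperienceProse.length >= 200 && // real prose section
    r.category !== null &&
    taxonomy.has(r.category) &&            // canonical category
    r.rating !== FALLBACK_SENTINEL         // not the fallback sentinel
  );
}
```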
Confidence tiers
Even after passing the floor, we tell you how strong the signal is: every published review carries a confidence tier alongside its verdict.
Adversarial fact-checking
Before publishing, every synthesized review is reviewed by a separate AI agent (we call it Ubik) acting as a fact-checker. Its job is to find any claim in the review that isn't supported by the source data we collected. Reviews that fail this check are rejected and re-run.
We use this same adversarial pass to surface contradictions across the four reality layers — for example, when the brand claims 30-hour battery life but the median user comment reports 14 hours.
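Numeric contradictions like that one can be surfaced mechanically once claims are normalized to a metric and a value. A sketch of the idea (the tolerance and names are illustrative, not GYIBB's actual checker):

```typescript
// Illustrative contradiction check between a brand claim and user reports.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Flag when the median user-reported value strays far from the brand claim.
function contradicts(brandClaim: number, userReports: number[], tolerance = 0.25): boolean {
  if (userReports.length === 0) return false; // nothing to compare against
  const m = median(userReports);
  return Math.abs(m - brandClaim) / brandClaim > tolerance;
}

// Brand claims 30 hours of battery; users report around 14.
console.log(contradicts(30, [12, 14, 15, 13, 16])); // true -> surface as a tension point
```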
What we do not do
- We do not use Reddit, YouTube, or any user-generated content to train AI models. Source data flows through inference at synthesis time and is then discarded.
- We do not sell, license, or share collected data with third parties.
- We do not retain content marked as `[deleted]` or `[removed]`; deletions propagate.
- We do not derive sensitive characteristics about commenters.
- We do not write reviews of products we couldn't gather enough data on. Empty section beats a confident lie.
Affiliate disclosure
GYIBB earns commissions from some links (Amazon Associates and select brand programs). The presence of an affiliate link never influences what we say about a product or how we score it — the synthesis is locked before any link is attached. We never write a review just because a product has a high commission, and our quality gates apply equally to affiliate and non-affiliate products.
Source attribution
Every claim we make traces back to a public discussion. The review page itself shows the platform breakdown and counts. If you want to audit a specific verdict, follow the source links in the user-reality section — every quote can be traced to its origin thread.
Plug GYIBB into your AI agent.
Other AI agents can query the GYIBB Truth Engine over the Model Context Protocol — for free, with no API key, with proper attribution baked into every response. Four tools available:
- `get_product_review` — synthesized verdict, sources, confidence tier
- `verify_claim` — fact-check a claim against our sources
- `compare_products` — side-by-side rating + pros/cons
- `search_products` — keyword search over the catalog
Endpoint: https://gyibb.com/mcp
Compatible with Claude Desktop, OpenClaw, and any MCP-aware agent. Streamable HTTP transport, no auth required at this stage.
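For example, with the official TypeScript MCP SDK, connecting and calling a tool looks roughly like this (the argument name is a guess — list the tools first to see each input schema):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "example-client", version: "1.0.0" });

// Streamable HTTP transport, no auth required.
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://gyibb.com/mcp")),
);

// Discover the four tools and their input schemas.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// "query" is a hypothetical parameter name, not confirmed by GYIBB.
const result = await client.callTool({
  name: "get_product_review",
  arguments: { query: "example product" },
});
console.log(result.content);
```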