Manifesto

Honest AI.
Honest Reviews.

Most review sites mix paid placements with hidden incentives and call it "objective." We built GYIBB to prove a different way is possible: every claim cited, every limit disclosed, every line of code open.

1. No bullshit.

Every pro, con, and verdict is backed by a real user comment, video quote, or brand document. Hover or click any claim to see the source — username, upvotes, date, link to the original thread. When we don't know, we say so. When sources disagree, we surface the disagreement, not paper over it.

2. No paid reviews.

Brands cannot pay us to write a review, change a verdict, or suppress a finding. The only money we make is affiliate commission when readers choose to click through to a product they were already going to buy. Affiliate links are disclosed inline, sit after the verdict (never before), and only appear if the brand has a public affiliate program — no quiet kickbacks.

3. No data harvesting.

We don't track you. No analytics fingerprinting, no third-party cookies, no session replay, no behavioural ads. The site logs standard HTTP access for our own infrastructure (rate limits, error rates) — that's it. Affiliate-click redirects pass through our server only to count clicks; we don't follow you onto the merchant's site or build a profile.
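The click-counting redirect described above can be sketched in a few lines. This is our own illustration, not GYIBB's actual code; the function name and the idea of keying counts by merchant URL are assumptions. The point is what is absent: no user identifier, cookie, or referrer ever touches the counter.

```python
from collections import Counter

# Aggregate click counts per outbound merchant URL.
# Nothing user-specific is stored -- matching the
# "count clicks, build no profile" rule above.
click_counts: Counter[str] = Counter()

def redirect(merchant_url: str) -> tuple[int, dict[str, str]]:
    """Count the click, then hand back an HTTP 302 to the merchant."""
    click_counts[merchant_url] += 1
    return 302, {"Location": merchant_url}
```

A design note: because the only state is an aggregate counter, there is nothing to correlate across visits and nothing worth harvesting.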

4. No AI hallucination tolerance.

A review needs a minimum of 10 user voices across at least 2 independent platforms before we publish it. Below that floor we shelve the product as "awaiting more data" — an empty catalogue beats a confident lie. Every published review is double-checked by an adversarial fact-checker (an LLM running with a strict contradict-me prompt) that hunts for unsupported claims; if it finds enough of them, the review is rejected before it goes live.
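The publish floor above reduces to a simple gate. The sketch below is illustrative only — the `Voice` type and function names are our own, not GYIBB's — but the two thresholds (10 voices, 2 platforms) are exactly the ones stated.

```python
from dataclasses import dataclass

@dataclass
class Voice:
    platform: str   # e.g. "reddit", "youtube"
    user: str

MIN_VOICES = 10      # minimum user voices per review
MIN_PLATFORMS = 2    # minimum independent platforms

def publish_gate(voices: list[Voice]) -> str:
    """Return 'publish' or 'awaiting more data' per the floor above."""
    platforms = {v.platform for v in voices}
    if len(voices) >= MIN_VOICES and len(platforms) >= MIN_PLATFORMS:
        return "publish"
    return "awaiting more data"
```

Note that both conditions must hold: ten comments scraped from a single forum still fail the gate, because one platform cannot corroborate itself.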

See how we work for the full standards YAML — it's the same file Ubik (our quality agent) reads on every review.

5. No closed garden.

The methodology page describes exactly how reviews are produced. Our free MCP server makes every verdict, source breakdown, and fact-check available to other AI agents — no API key, no gatekeeping, with proper attribution baked into every response. If a competing review aggregator wants to surface our work in their app, they can. If a researcher wants to audit our verdict on a product, they can — every cited source is public.

Building an agent? Four MCP tools, free →

6. No grandstanding.

We publish our reject rate. We publish our blind spots (small niches, very new products, languages other than English). When our coverage of a category is weak, we say so on the category page rather than padding it with thin reviews. When a brand denies us API access we say that publicly too — being a truth engine means being honest about your own limitations.

What we use AI for. What we don't.

AI does this

  • Summarising large volumes of public user comments
  • Cross-validating claims against source quotes
  • Adversarial fact-checking (the "find every unsupported claim" pass)
  • Detecting corruption patterns at the quality gate
  • Filtering noise comments ("first!", spam, off-topic)

AI does NOT do this

  • Inventing data when sources are insufficient
  • Training on user-generated content from any source
  • Manufacturing a brand-favourable point of view
  • Hiding behind autonomy when something goes wrong
  • Replacing human judgement on edge cases