How does ABKE (AB客) run a “de-AI-ification” content test to compare human expert tone vs pure AI copy, using reading time and scroll depth as proof?
ABKE’s GEO “de-AI-ification” test uses an A/B experiment: (A) human expert tone with verifiable fact slices (process/standard/data source) vs (B) pure AI copy without an evidence chain. We measure GA4/Matomo Avg. engagement time (seconds) and Scroll depth (%) under controlled variables (same page layout, same traffic channel, same publish window), and report the median engagement time for each group plus the delta in seconds.
Purpose (Awareness → Interest): why test “human expert tone” vs “pure AI copy”?
In the AI-search era, B2B buyers often ask AI systems questions like “Which supplier is reliable?” or “Who can solve this technical issue?”. ABKE (AB客) treats content as evidence-backed knowledge assets that help AI systems understand and trust a company.
This “de-AI-ification” test checks whether adding verifiable knowledge slices (facts, standards, data sources) increases measurable user engagement compared with generic AI-generated copy.
Test design (Evaluation): what exactly is A/B tested?
Variant A — Human expert tone
- Includes verifiable fact slices: process steps, applicable standards, and data source references.
- Structure supports technical decision-making: assumptions → method → measurable outcome.
- Goal: increase trust signals and reduce evaluation friction.
Variant B — Pure AI copy
- Does not include an evidence chain: lacks checkable facts, standards, or data sources.
- Typical risk: vague claims that cannot be validated by a buyer or by an AI knowledge graph.
- Goal: serve as a baseline for measuring the lift from “evidence-based” writing.
Key principle: Only the text content differs. All other variables are controlled to isolate the effect of “expert + evidence” vs “generic AI”.
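The split between Variant A and Variant B has to stay stable per visitor, or the controlled-variable principle above breaks. One common way to do this is deterministic hashing of a visitor identifier; the sketch below is illustrative (the function name, experiment key, and hashing scheme are assumptions, not ABKE's documented implementation):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "de-ai-test") -> str:
    """Deterministically bucket a visitor into Variant A or B.

    Hashing (experiment + visitor_id) keeps the assignment stable across
    sessions, so the same visitor always sees the same copy variant.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the bucket is derived from the ID rather than stored state, no cookie or database lookup is needed to keep the experience consistent.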
Metrics & tooling (Evaluation): what is measured and how?
- Avg. engagement time (seconds) — captured via GA4 or Matomo (unit: s).
- Scroll depth (%) — captured via GA4 or Matomo (unit: %).
Controlled variables (to ensure comparability)
- Same page structure: identical layout modules, headings, CTA positions, and media blocks.
- Same traffic channel: e.g., identical UTM source/medium or the same referral placement.
- Same publish window: identical time range to reduce seasonality and campaign interference.
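Filtering exported sessions down to the controlled traffic channel can be sketched as follows, assuming session rows exported from GA4 or Matomo (the record fields and helper name here are hypothetical, not a real GA4/Matomo schema):

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    variant: str              # "A" or "B"
    engagement_time_s: float  # engagement time in seconds
    scroll_depth_pct: int     # max scroll depth reached, 0-100
    utm_source: str
    utm_medium: str

def comparable(sessions, source, medium):
    """Keep only sessions from the controlled UTM source/medium."""
    return [s for s in sessions
            if s.utm_source == source and s.utm_medium == medium]
```

Applying the same filter to both variants enforces the "same traffic channel" control before any metric is computed.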
Result format (Decision): what outputs do buyers and stakeholders get?
ABKE reports outcomes in a format that can be audited and compared across pages and campaigns:
- Median engagement time for Variant A (seconds).
- Median engagement time for Variant B (seconds).
- Delta between medians (seconds): Median(A) − Median(B).
- Scroll depth (%) distribution comparison as a supporting engagement indicator.
Using the median reduces the influence of extreme sessions (e.g., accidental long idle tabs).
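The reported numbers above reduce to a small calculation over per-session engagement times; a minimal sketch (the function name and output keys are illustrative):

```python
from statistics import median

def report(times_a, times_b):
    """Median engagement time (s) per variant, plus the delta in seconds.

    The median damps outlier sessions such as idle tabs left open.
    """
    med_a = median(times_a)
    med_b = median(times_b)
    return {"median_A_s": med_a, "median_B_s": med_b, "delta_s": med_a - med_b}
```

For example, one idle 300-second session in Variant A shifts the mean heavily but leaves the median untouched, which is exactly why the median is reported.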
Operational notes & boundaries (Purchase → Loyalty): what this test can and cannot prove
What it supports
- Whether evidence-backed writing increases measurable on-page engagement (seconds, %).
- Whether readers consume more depth when content contains checkable technical slices.
What it does not guarantee
- It does not by itself prove AI recommendation ranking improvements in any specific model.
- It does not replace lead-to-contract tracking; it is an engagement-layer validation.
In ABKE’s GEO delivery, this test is typically used as a content quality gate before scaling distribution into the global publishing network.