ListMatchGenie

Reading a report

How to interpret each section of a ListMatchGenie report, what questions each answers, and how to turn observations into action.

A report is structured so you can read it top-to-bottom and come away with a complete picture, or jump to the section that answers your specific question. This page is a reading guide.

Start with the executive summary

The Genie's opening paragraphs set the frame. Read them first — they tell you:

  • Overall match rate and whether it's typical for your data shape
  • Where the matches came from (identifier, fuzzy, phonetic)
  • Where the unmatched records clustered (geography, data quality, a specific subset)
  • Recommended next action (if any)

If the summary says "excellent match quality, export and go", you often don't need to read further.

Use stats cards as anchors, not analysis

The four stat cards (source rows, matched, review, unmatched) are descriptive, not prescriptive. A 65% match rate isn't good or bad by itself — it depends on what you expected. An internal customer dedup might legitimately produce a 98% match rate; a cold lead match against an existing CRM might land at 20%.

Calibrate against your expectation:

  • Much higher match rate than expected — are you matching too loosely? Check threshold.
  • Much lower than expected — is your master comprehensive? Are identity columns populated in both files?
  • Match rate in your expected range — no investigation needed, move on.
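
That calibration can be sketched as a small check. This is an illustrative Python sketch, not a ListMatchGenie API — the expected-range values are hypothetical and yours to set per run:

```python
def calibrate(matched: int, source_rows: int,
              expected_low: float, expected_high: float) -> str:
    """Compare an observed match rate to the range you expected.

    The expected range is an assumption you supply -- e.g. (0.90, 1.00)
    for an internal dedup, (0.10, 0.30) for cold leads against a CRM.
    """
    rate = matched / source_rows
    if rate > expected_high:
        return f"{rate:.0%}: higher than expected -- check for loose thresholds"
    if rate < expected_low:
        return (f"{rate:.0%}: lower than expected -- check master coverage "
                f"and identity columns")
    return f"{rate:.0%}: within expected range -- no investigation needed"
```

For example, 650 matches out of 1,000 source rows against an expected range of 50–80% comes back as "within expected range".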

Read the match method breakdown like a health check

Healthy patterns for different data shapes:

  • Identifier-rich data (both files have email or NPI): should be 70%+ from exact_id or deterministic. High fuzzy % means your IDs aren't matching when they should — investigate.
  • Name-only data (no IDs, just names): most matches come from fuzzy. Expected.
  • International data: higher phonetic % than US-only data. Expected.

If the distribution surprises you, dig into why — the data-quality narrative below usually explains it.
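
The health check itself is simple arithmetic on the method counts. A minimal sketch, assuming hypothetical method names and counts (the 70% exact-ID floor for identifier-rich data comes from the guidance above):

```python
def method_shares(counts: dict[str, int]) -> dict[str, float]:
    """Convert per-method match counts into shares of total matches."""
    total = sum(counts.values())
    return {method: n / total for method, n in counts.items()}

# Hypothetical breakdown for identifier-rich data: the exact-ID share
# should dominate; a large fuzzy share is the signal to investigate.
shares = method_shares({"exact_id": 820, "fuzzy": 150, "phonetic": 30})
if shares.get("exact_id", 0) < 0.70:
    print("IDs aren't carrying the match -- investigate identifier columns")
```

Run the same check with the expectation flipped for name-only data (most matches from fuzzy is healthy there).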

Read the score distribution shape

Look at the shape, not the counts:

  • Bimodal (two clear peaks) — good. Matches are confidently high-scoring, non-matches are confidently zero-scoring, clean separation.
  • Smeared middle (lots of mid-range scores) — ambiguity. Your threshold is probably cutting through real matches. Worth investigating the data-quality narrative.
  • Single peak near zero — match rate will be low. Either your data has no signal, or your profile is wrong.
  • Single peak near 100 — either your data is extremely clean or your threshold is so low everything passes.
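
The four shapes above can be approximated with a crude band count. This is an illustrative sketch, not how the Genie classifies distributions — the band edges (30/70) and proportions are assumptions:

```python
def distribution_shape(scores: list[float]) -> str:
    """Crude shape check on a 0-100 match-score distribution.

    Buckets scores into low / mid / high bands and returns a label
    matching the patterns described above.
    """
    low = sum(1 for s in scores if s < 30)
    mid = sum(1 for s in scores if 30 <= s < 70)
    high = sum(1 for s in scores if s >= 70)
    total = len(scores)
    if mid > 0.3 * total:
        return "smeared middle -- threshold likely cutting through real matches"
    if low > 0.2 * total and high > 0.2 * total:
        return "bimodal -- clean separation"
    if high >= low:
        return "single peak near 100 -- very clean data, or threshold too low"
    return "single peak near zero -- expect a low match rate"
```

A distribution with clusters at 5 and 95 comes back bimodal; one dominated by mid-range scores comes back smeared.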

Pivots answer "where does the problem live"

Pivots show per-dimension match rate. The valuable comparison is across groups within one pivot:

  • State pivot shows 95% match rate in NY, 40% in TX → your master has thin TX coverage.
  • Industry pivot shows 80% match rate overall but 20% for "Other" → that "Other" bucket is a noise catchall.
  • Data-quality pivot shows match rate drops 50 points between "excellent" and "poor" → invest in cleaner input data.

Pivots turn abstract stats into targeted next actions.
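
The same per-group comparison is easy to reproduce on an exported CSV. A minimal sketch, assuming hypothetical column names (`state` as the pivot dimension, a boolean `matched` flag per row):

```python
from collections import defaultdict

def pivot_match_rate(rows: list[dict]) -> dict[str, float]:
    """Per-group match rate for one pivot dimension ('state' here)."""
    matched = defaultdict(int)
    total = defaultdict(int)
    for row in rows:
        total[row["state"]] += 1
        matched[row["state"]] += row["matched"]  # True counts as 1
    return {group: matched[group] / total[group] for group in total}

# Synthetic rows mirroring the NY-vs-TX example above:
rates = pivot_match_rate(
    [{"state": "NY", "matched": True}] * 19
    + [{"state": "NY", "matched": False}]
    + [{"state": "TX", "matched": True}] * 4
    + [{"state": "TX", "matched": False}] * 6
)
# rates: NY at 95%, TX at 40% -- the gap, not either number, is the finding.
```

Comparing groups within the same pivot is what makes the gap meaningful; the absolute rates inherit all the caveats from the stat-card section.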

The data quality narrative is your improvement plan

This section tells you exactly what would improve your match rate if you fixed it next time. Pay attention to:

  • Columns flagged as high-null — you may be losing signal
  • Columns with inconsistent formatting that cleansing didn't fully resolve
  • Unrecognized values that the Genie couldn't map

Each of these is an actionable item for your next run.

Key findings are the "here's what matters" summary

Genie-written, 3–5 bullets. These are the observations a careful analyst would make after staring at the full report for 20 minutes. Read them even if you skip everything else.

Sample rows ground the abstract stats

The samples at the bottom show real matched, review, and unmatched rows from your data. Spot-checking 5–10 of them validates the stats — if the matched samples look wrong, your profile or threshold needs work; if the unmatched samples look like they should obviously be in the master, your master may be missing those entries.
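
If you export the data, drawing your own spot-check sample takes a few lines. An illustrative sketch (the bucket is whichever list of rows you exported; the fixed seed keeps the sample reproducible across reviewers):

```python
import random

def spot_check(rows: list[dict], k: int = 10, seed: int = 0) -> list[dict]:
    """Draw a reproducible sample from one bucket (matched, review,
    or unmatched) for manual eyeballing."""
    rng = random.Random(seed)
    return rng.sample(rows, min(k, len(rows)))
```

Pull 5–10 from each bucket rather than a large sample from one — the goal is validating the stats, not re-auditing the run.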

Ask follow-up questions to go deeper

The inline Q&A lets you ask the Genie things the report didn't pre-compute:

  • "Show me the top 10 companies with the most unmatched contacts."
  • "What percentage of matches had a phone number difference?"
  • "List states with match rate below 50%."

Good follow-up questions are specific. Broad questions ("why was the match rate low") get less useful answers than targeted ones. See Follow-up questions.

When to share vs. export

  • Share the report when the recipient will read and act on it (account manager reviewing a territory, ops lead validating a data vendor).
  • Export the data when the recipient will do further processing (engineer building a pipeline, analyst doing their own cuts).

Usually both — share the report for context, attach the CSV for depth.