
Almost every operations report I've read in the last six months has one line that looks the same.
"Inventory accuracy: 96%."
The number varies. The shape does not. A single percentage, reported monthly or quarterly, usually next to a green up-arrow if it's higher than last quarter, a red down-arrow if it's lower. It shows up in ops reviews, board decks, and customer QBRs. Nobody asks what the number means. They glance at it, they nod, and they move on.
The number is wrong. Not in the arithmetic. The arithmetic is usually fine. It's wrong in the aggregation. "Inventory accuracy" as a single KPI is three different problems collapsed into one, and the collapse is what's costing you money.
Let me break it apart.
Most operators derive inventory accuracy the same way. Count what's in the building. Compare to what the system says. Count the number of SKUs where the two numbers match (or fall within some tolerance). Divide by total SKUs. Report the ratio.
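The steps above are simple enough to sketch in a few lines of Python. This is illustrative only; the tuple shape and the tolerance parameter are assumptions, not pulled from any particular WMS:

```python
def snapshot_accuracy(counts, tolerance=0):
    """Fraction of SKUs where the physical count matches the system record.

    counts: list of (sku, physical_qty, system_qty) tuples.
    tolerance: absolute unit difference still treated as a match.
    """
    if not counts:
        return 0.0
    matches = sum(
        1 for _, physical, system in counts
        if abs(physical - system) <= tolerance
    )
    return matches / len(counts)

inventory = [
    ("SKU-001", 40, 40),   # exact match
    ("SKU-002", 11, 12),   # off by one
    ("SKU-003", 98, 98),   # exact match
    ("SKU-004", 5, 9),     # off by four
]
print(snapshot_accuracy(inventory))               # 0.5
print(snapshot_accuracy(inventory, tolerance=1))  # 0.75
```

Note how much the tolerance choice alone moves the headline number, before any of the deeper problems below even come into play.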
That calculation tells you one thing: at a specific moment, on a specific day, with a specific counting methodology, what fraction of your SKUs were synchronized between the physical world and the record system.
It does not tell you:
- How stale the number is, since accuracy decays between counts.
- Which direction the errors run: overcounts or undercounts.
- Where the errors concentrate: high-velocity SKUs or the long tail.
- How long errors live before anyone notices them.
These are four different variables. A single percentage collapses them into one and loses all the signal.
Problem 1: Duration. Inventory drifts over time. Your 96% was true the morning you took the count. By the time you published it in the report, you had three days of receiving and shipping activity that you hadn't counted. Real duration-adjusted accuracy, in a busy warehouse or dispensary, is often 3 to 8 percentage points lower than the snapshot number.
If you only measure accuracy monthly, you are measuring it right after a count. That is the moment of highest accuracy in the cycle. You are systematically over-reporting.
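To see the size of the gap, assume (purely for illustration) that errors accrue roughly linearly between counts at a drift rate you estimate from your own count-to-count history. The linear model and the numbers below are assumptions, not a claim about any specific operation:

```python
def duration_adjusted_accuracy(snapshot, drift_per_day, days_since_count):
    """Illustrative model: accuracy decays roughly linearly between counts.

    snapshot: accuracy measured right after the count (e.g. 0.96).
    drift_per_day: estimated fraction of SKUs falling out of sync per day,
                   derived from your own historical count variance.
    """
    return max(0.0, snapshot - drift_per_day * days_since_count)

# 96% snapshot, ~0.4% of SKUs drifting per day, report published 10 days later:
print(round(duration_adjusted_accuracy(0.96, 0.004, 10), 3))  # 0.92
```

Even a modest drift rate puts the published number several points above reality, which is exactly the over-reporting pattern described above.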
Problem 2: Direction. Overcounts and undercounts are very different kinds of bad. Overcounts in a fulfillment operation are margin leakage. You shipped more than you invoiced for. Undercounts are customer-service events and chargebacks. A 96% accuracy rate that's 70% overcounts and 30% undercounts tells a completely different story than one that's 70% undercounts and 30% overcounts, and the two need different interventions.
When both get rolled into a single accuracy number, you lose the ability to see which direction the money is flowing.
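The directional split is a one-pass computation over the same count data. A sketch, using an illustrative (sku, physical_qty, system_qty) tuple shape and one common sign convention (adjust the labels to match how your operation defines over versus under):

```python
def directional_variance(counts):
    """Separate mismatches by direction instead of rolling them into one rate.

    counts: list of (sku, physical_qty, system_qty) tuples.
    Returns (overcount_units, undercount_units), where an overcount here
    means the system record is higher than the shelf. This convention is an
    assumption; flip it if your reporting defines the terms the other way.
    """
    over = sum(system - physical
               for _, physical, system in counts if system > physical)
    under = sum(physical - system
                for _, physical, system in counts if physical > system)
    return over, under

inventory = [
    ("SKU-001", 40, 44),  # system high by 4 units
    ("SKU-002", 12, 10),  # system low by 2 units
    ("SKU-003", 98, 98),  # in sync
]
print(directional_variance(inventory))  # (4, 2)
```

Two numbers instead of one, and the direction of the money is suddenly visible.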
Problem 3: Concentration. 96% SKU-accuracy can mean "most of our errors are clustered in our top 20 high-velocity SKUs" or "we have a long-tail problem with 400 slow-moving SKUs that nobody touches." Those are different problems with different fixes. The first is a workflow or training problem. The second is often a data-hygiene problem. SKUs that got created, stocked once, and never reconciled again.
A single percentage treats both the same. Your ops team can hit the KPI by fixing the wrong problem.
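Bucketing errors by SKU velocity makes the concentration visible. A sketch; the 20/60/20 cut points and the data shape are illustrative choices, not a standard:

```python
def bucket_errors_by_velocity(skus):
    """Group absolute variance by SKU velocity bucket.

    skus: list of (sku, units_moved, abs_variance) tuples.
    Buckets: top 20% of SKUs by volume, middle 60%, long-tail 20%.
    Returns total absolute variance per bucket.
    """
    ranked = sorted(skus, key=lambda s: s[1], reverse=True)
    n = len(ranked)
    top_cut = max(1, round(n * 0.2))
    mid_cut = max(1, round(n * 0.8))
    buckets = {"top_20": 0, "middle_60": 0, "long_tail_20": 0}
    for i, (_, _, variance) in enumerate(ranked):
        if i < top_cut:
            buckets["top_20"] += variance
        elif i < mid_cut:
            buckets["middle_60"] += variance
        else:
            buckets["long_tail_20"] += variance
    return buckets

skus = [
    ("SKU-A", 1000, 3),  # high-velocity
    ("SKU-B", 400, 0),
    ("SKU-C", 120, 1),
    ("SKU-D", 30, 7),
    ("SKU-E", 2, 9),     # long-tail, rarely touched
]
print(bucket_errors_by_velocity(skus))
# {'top_20': 3, 'middle_60': 8, 'long_tail_20': 9}
```

In this toy data the long tail carries the most variance, which points toward data hygiene rather than picker training.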
I call this the Counting Tax because that is what it functions as. A single inventory-accuracy KPI extracts a payment from operators in the form of hidden information. It hides the information that would have let them fix the actual underlying problems.
The tax compounds. If your team is rewarded for hitting 96%, they will find the fastest path to the number. That path is almost always counting more often, more carefully, at more cost. The underlying sources of error (the receiving gap, the picker confusion, the long-tail data rot) don't get addressed. They get counted around.
Operators who run this way for years end up with entire counting teams that exist to hit the number. Nobody in that team is fixing anything. They are, literally, just counting.
Break accuracy into components. Measure each on its own. Report all of them.
1. Continuous accuracy. Not snapshot accuracy. If your physical inventory is measured continuously, by scale or scan-through or any mechanism that runs while work happens, you can report a running accuracy percentage that's always current. No "month-end correction." You know your accuracy right now.
2. Directional variance. Split overcounts and undercounts. Report them separately. You'll see immediately whether your operation leaks value out (overcounts) or in (undercounts), and you can triage accordingly.
3. Concentration. Bucket errors by SKU velocity. Top 20% of SKUs by volume, middle 60%, long-tail 20%. Report variance within each bucket. The bucket where most of your errors live is the bucket where your operational intervention needs to focus.
4. Time-to-discovery. How long between an error occurring and the system noticing? This is the single most important operational metric that nobody measures. If your average time-to-discovery is 28 days, you have 28 days of compounding risk on every transaction. If it's under an hour, your risk exposure per event is trivial.
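Of the four, time-to-discovery is the easiest to compute once you log both timestamps: it's a mean over (occurred, discovered) gaps. In practice the occurred-at time usually has to be estimated, for example from the last known-good transaction on the SKU; the example data below is invented:

```python
from datetime import datetime, timedelta

def mean_time_to_discovery(events):
    """Average gap between an error occurring and the system noticing it.

    events: list of (occurred_at, discovered_at) datetime pairs.
    occurred_at is typically an estimate (e.g. the last known-good
    transaction for the SKU), not an exact timestamp.
    """
    gaps = [discovered - occurred for occurred, discovered in events]
    return sum(gaps, timedelta()) / len(gaps)

events = [
    (datetime(2024, 3, 1), datetime(2024, 3, 29)),  # caught at month-end count
    (datetime(2024, 3, 10), datetime(2024, 4, 7)),  # likewise
]
print(mean_time_to_discovery(events))  # 28 days, 0:00:00
```

A 28-day mean is what a monthly count cycle produces almost by construction; continuous measurement is what pulls it down toward hours.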
Two shifts make this urgent.
The first is margin compression. In cannabis retail, fulfillment, industrial distribution, and healthcare supply, margins have tightened over the last three years. Shrinkage that was tolerable at 22% gross margin is intolerable at 16%. You cannot afford to be quietly bleeding inventory into the overcount column.
The second is the information asymmetry between you and your customers. Your customers increasingly have real-time inventory data from their own systems and their own audits. If they know their shipment was short (or long) before you do, you've lost the conversation. Often you've lost the relationship.
A rolled-up monthly accuracy KPI was adequate for an operator whose customers didn't know better. Your customers know better now.
You do not need to replace your systems. You need to measure what's underneath the number.
Start with one change: report accuracy in four buckets instead of one. Over the next quarter, you will see patterns you did not know existed. Specific SKUs that leak. Specific shifts that compound errors. Specific receiving windows that generate long-tail drift.
Those patterns are the fix. The single percentage never showed them to you. Now you see them.
Then you can go after the source of each, which will mean different interventions for different problems. Some will be process. Some will be training. Some will be a continuous-verification layer under the counting step. That last one is, yes, what we build. But it's a tool. The diagnosis matters more than the tool.
A specific, decomposed measurement is worth more than a confident summary. Stop letting the summary decide what your team works on.
What is "inventory accuracy" actually measuring?
In most operations, it's measuring the match rate between physical count and system record, at a single moment, across all SKUs treated equally. It's a useful input but a misleading single KPI.
What's a realistic accuracy benchmark?
Snapshot accuracy in the 97 to 99% range is common for well-run operations. Duration-adjusted accuracy (the number that matters) is usually 3 to 8 points lower. The better question is: what's our time-to-discovery on errors? A well-run continuous-measurement operation answers in minutes or hours, not weeks.
Do we need new software to do this?
Not necessarily. A basic report out of your WMS or ERP, re-organized by SKU velocity bucket and separated into over/under direction, gets you 70% of the way there. Continuous-measurement software gets you the rest of the way.
Is 96% good?
It depends entirely on what the 96% is made of. 96% with errors spread evenly across a low-value SKU tail is very different from 96% where the errors are concentrated in your highest-margin product. Disaggregate before comparing.
What does continuous measurement actually change?
It collapses time-to-discovery from days or weeks to minutes. An error you catch in an hour costs you one transaction. An error you catch in three weeks costs you three weeks of compounding transactions.