Context
One of my favorite kinds of product work is when the system starts compounding knowledge instead of discarding it.
That was the opportunity here. Truck and dray pricing often lived in email threads, spreadsheets, old quote files, and people’s memory. Teams would do the work to figure out a usable rate, send the quote, and then effectively lose that effort as soon as the job moved on. A month later, someone would solve the same problem again from scratch.
The quote engine already had the right raw material. The missing piece was turning generated quotes into a searchable operational asset.
Problem
The workflow had several built-in inefficiencies:
- generated quote data was rich enough to be useful later, but it disappeared into artifacts
- truck pricing varied across quote shapes and lane structures
- future users needed searchability, not just archival storage
- duplicate or outdated entries could quickly pollute reuse
- import back into new quotes had to be practical, not theoretical
This was less about one feature and more about closing a knowledge loop.
Constraints
The constraints were very real:
- quote data existed in more than one structural shape
- pickup and delivery context could be represented differently across flows
- reuse had to be fast enough for live quoting work
- stored rates needed enough provenance to be trusted later
- the system had to avoid filling the matrix with near-duplicates and noise
If the matrix became cluttered or hard to search, people would go back to inbox archaeology immediately.
What I Built
I treated the truck rate matrix as a derived data product, not a manual side table.
First, I normalized rate information out of generated quotes. Instead of relying on someone to re-enter usable pricing after a quote was sent, I extracted relevant truck-rate data directly from the quote-generation flow and transformed it into a consistent schema.
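As a minimal sketch of that normalization step, here is what flattening two hypothetical quote shapes into one consistent record might look like. The field names, the nested-stops vs. flat-fields shapes, and the function itself are all illustrative assumptions, not the real schema:

```python
def normalize_truck_rate(quote: dict) -> dict:
    """Flatten a generated quote into a consistent truck-rate record.

    Hypothetical: newer quotes nest stops; older ones use flat
    pickup/delivery fields. Both reduce to the same shape.
    """
    if "stops" in quote:
        pickup = quote["stops"][0]
        delivery = quote["stops"][-1]
    else:
        pickup = {"city": quote.get("pickup_city"), "zip": quote.get("pickup_zip")}
        delivery = {"city": quote.get("delivery_city"), "zip": quote.get("delivery_zip")}
    return {
        "pickup_city": pickup.get("city"),
        "pickup_zip": pickup.get("zip"),
        "delivery_city": delivery.get("city"),
        "delivery_zip": delivery.get("zip"),
        "equipment": quote.get("equipment", "unknown"),
        "rate_usd": float(quote["truck_rate"]),  # coerce string amounts
        "source_quote_id": quote.get("id"),      # provenance back-link
    }
```

The point of the sketch is the shape of the work: every structural variant collapses into one record the rest of the system can treat uniformly.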
Second, I designed the schema around future search and import, not just storage. A rate matrix only matters if the next person can find a likely match quickly. That meant preserving fields like pickup location, ZIP context, door or city details, equipment assumptions, and pricing-relevant structure in a way the UI could query efficiently.
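A schema built for search and trust might look something like the record below. This is an assumed model, not the production one: the fields mirror the ones named above, and the provenance fields exist so a later user can judge whether a stored rate is still believable:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class TruckRateRecord:
    """Illustrative rate-matrix row; field names are assumptions."""
    pickup_city: str
    pickup_zip: str
    delivery_city: str
    delivery_zip: str
    equipment: str
    rate_usd: float
    # Provenance: which quote produced this rate, and when.
    source_quote_id: str
    captured_on: date

    @property
    def lane_key(self) -> tuple:
        """Query/dedupe key: lane plus equipment, ignoring price and provenance."""
        return (self.pickup_zip, self.delivery_zip, self.equipment)
```

Separating the searchable identity (`lane_key`) from the payload (price, provenance) is what lets the UI index on the fields people actually reach for.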
Third, I added dedupe and edit discipline. Not every quote deserved a brand-new permanent record. The system needed to recognize when a newly generated rate was effectively the same as an existing one, or when the right behavior was to update or enrich prior knowledge rather than create a fresh duplicate.
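One way to sketch that discipline is an upsert keyed on the lane: new lanes create records, repeat lanes refresh the existing one. The key choice and the newest-wins policy here are assumptions for illustration (dates are ISO strings so they compare lexicographically):

```python
def upsert_rate(matrix: dict, record: dict) -> str:
    """Insert a normalized rate, or refresh an existing near-duplicate.

    `matrix` maps (pickup_zip, delivery_zip, equipment) -> record.
    Hypothetical policy: the newer quote wins; older data never
    overwrites fresher knowledge. Returns what the call did.
    """
    key = (record["pickup_zip"], record["delivery_zip"], record["equipment"])
    existing = matrix.get(key)
    if existing is None:
        matrix[key] = record
        return "created"
    # Same lane and equipment: update prior knowledge, don't duplicate it.
    if record["captured_on"] >= existing["captured_on"]:
        matrix[key] = record
        return "updated"
    return "skipped"
```

Whatever the real matching rules were, the load-bearing idea is that "store this rate" is a decision, not a blind append.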
Fourth, I gave users a dedicated way to inspect and reuse the data. Search, edit, and re-import flows turned the matrix into a real operational tool. Without those surfaces, the backend work would have been technically correct and practically unused.
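The search surface can be sketched as a simple filter over stored records, newest first. The parameters below (ZIP prefix, equipment) are assumed stand-ins for whatever the real UI exposed:

```python
def search_rates(records: list, pickup_zip_prefix: str = "", equipment: str = "") -> list:
    """Filter stored rates by pickup ZIP prefix and equipment type.

    Illustrative only: returns matches newest-first, on the theory
    that recent rates are the most likely to be reusable as-is.
    """
    hits = [
        r for r in records
        if r["pickup_zip"].startswith(pickup_zip_prefix)
        and (not equipment or r["equipment"] == equipment)
    ]
    return sorted(hits, key=lambda r: r["captured_on"], reverse=True)
```

Ranking by recency is one reasonable default; the broader point is that the result a user sees first should be the one most likely to import cleanly into a new quote.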
Finally, I kept the feedback loop close to the live quote workflow. The strongest part of the design was not just the data model. It was the fact that the system captured knowledge where the work already happened, instead of asking teams to maintain a second process.
Validation
Validation focused on whether the matrix could survive production use:
- could different quote shapes normalize into one usable rate model?
- could users find prior rates using the search terms they naturally reached for?
- did re-import into new quotes preserve enough context to be helpful?
- were duplicate records controlled well enough to keep trust high?
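The first of those questions lends itself to a direct test: two structurally different quotes for the same lane should reduce to the same comparable value. The shapes and helper below are illustrative assumptions, not the actual test suite:

```python
def extract_lane(quote: dict) -> tuple:
    """Reduce either hypothetical quote shape to a (pickup_zip, delivery_zip) lane."""
    if "stops" in quote:
        return (quote["stops"][0]["zip"], quote["stops"][-1]["zip"])
    return (quote["pickup_zip"], quote["delivery_zip"])


# Nested-stops and flat quotes for the same lane must normalize identically.
nested = {"stops": [{"zip": "60601"}, {"zip": "30303"}]}
flat = {"pickup_zip": "60601", "delivery_zip": "30303"}
assert extract_lane(nested) == extract_lane(flat)
```

Checks like this make "survives production use" a property you can assert, not just hope for.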
I also cared about behavioral validation. The point was not simply to prove the matrix existed. The point was to make it easier for teams to reuse prior work than to ignore the feature.
Outcome
The result was a stronger operational memory system.
- truck-rate knowledge stopped disappearing after quote generation
- users could search earlier work instead of reconstructing it manually
- the quoting platform gained a compounding advantage over time
- pricing reuse became part of the product, not just a heroic habit
This is one of those pages that I think translates well outside freight. Many internal tools generate valuable structured knowledge and then throw it away because nobody closes the loop. Turning outputs into reusable inputs is a broadly useful systems pattern.
Lessons
Data reuse only works when it is built into the workflow where knowledge is created.
Teams almost never maintain side systems consistently unless the value is immediate and the extra work is tiny. Capturing truck-rate knowledge after quote generation worked because it piggybacked on work users were already doing. Search and import mattered because they created a fast payoff later.
That is how internal software becomes compounding infrastructure instead of another form everyone resents.
If you have operational knowledge trapped in exports, inboxes, or one-off artifacts, I enjoy turning that into something reusable and searchable. Let’s talk.