A disaster prevention document platform for one of Japan's largest municipalities — an approx. ¥60M, 3-year contract. The question was which technology to use for "similarity search" across thousands of pages. Too small to justify hiring a dedicated architect, but too complex for the internal team to decide alone. It was exactly this gap that Mizunara Co., Ltd. filled with Leach's Generative AI Advisor. We spoke with CTO Masayuki Oda and CEO Yuma Kinoshita.
"A technical advisor you can retain on an hourly basis, the way you would a lawyer, is almost nonexistent in the market."
── Masayuki Oda, CTO, Mizunara Co., Ltd.
*The disaster prevention project discussed in this article includes work performed under Mizunara's predecessor organization. The approx. ¥60M figure refers to the overall scope (including infrastructure), not Leach's advisory or maintenance fees. For confidentiality, the client's name and certain specific details have been withheld.
Engagement Summary
For details on the service featured in this case, see Leach Generative AI Advisor.
Company Profile
| Company | Mizunara Co., Ltd. |
| Distinction | University-originated venture officially recognized by Advanced Institute of Industrial Technology (AIIT), Tokyo Metropolitan University |
| Main initiatives | "Wordless" (large-scale document management) and "OffiStill" (office tools for the AI era) |
| Team | 7 members (2 full-time) |
| Focus of this case | Similarity search design consulting for a ~¥60M disaster prevention document platform |
| Web | https://www.mizunara.io/ |
Interviewees:
Masayuki Oda, CTO, Mizunara Co., Ltd.
Yuma Kinoshita, CEO, Mizunara Co., Ltd.
1. About Mizunara — Building foundations for handling large-scale documents
Tominaga: First, could you tell us about Mizunara's business?
Mr. Oda: Mizunara works on the theme of how to manage, edit, and publish large-scale documents more easily. In-house, we're researching and developing "Wordless" for large-document management and "OffiStill," an office tool for the AI era.
Real-world business documents don't work in plain Markdown alone. You need tables, figures, citations — a system that can handle them at a "production-grade granularity." This project was very much an extension of that theme.
Mr. Oda: We're a small team running both contract work and in-house product R&D in parallel. That's exactly why we emphasize not taking everything on ourselves, but bringing in strong external people for the critical pieces. Asking Leach to help was squarely in that spirit.
2. Challenges before adoption — Similarity search was needed, but specs and budget constraints left the technology choice undecided
The target was a disaster prevention document platform for one of Japan's largest municipalities. A ~¥60M, 3-year contract to ingest large volumes of PDFs and images via OCR, and make them searchable and viewable on the web. The existing database was MySQL, with full-text search also handled by MySQL.
Tominaga: What was the hardest part?
Mr. Oda: The biggest issue was that we had the requirement — "find similar passages" — but we couldn't decide which technology to choose within the team alone. The existing MySQL full-text search had clear expressiveness limits. Yet what should come next wasn't obvious.
When we first reached out, honestly, it was a pretty fuzzy request: "we'd like a way to find similar documents." Mr. Tominaga responded, "then vector search is a candidate." On our own, we hadn't reached that option.
The "100% achievement rate" constraint unique to public-sector projects
Mr. Oda: If you only look at the search feature, there are plenty of choices. But in public-sector work, "if it's written in the spec, it must actually run, and the achievement rate cannot slip" is absolute. Even if a better idea comes up mid-project, doing something not in the spec breaks the Gantt chart, lowers the achievement rate, and the subsidy can be discontinued.
So we build the Gantt chart with plenty of buffer, prioritizing hitting ~100% on annual achievement rates above all. Within that constraint, we had to introduce a new element — similarity search. It came down to: could we pick not "the best tech" but "tech that fits the constraints"?
The three points Mizunara was wrestling with at the time:
- They knew they wanted "find similar documents," but didn't know which technology would deliver it. They hadn't arrived at vector search on their own; Leach proposed the semantic-search direction.
- Not just search quality — operational load, budget ceiling, and accountability all had to hold together at once. With no dedicated infra person, a Serverless-based architecture that could run without strain was preferred.
- The spec and the schedule couldn't be compromised. Any new element had to fit a design that wouldn't hurt the achievement rate or the delivery date.
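The semantic-search direction that Leach proposed rests on a simple mechanism: represent each passage as an embedding vector and rank passages by vector similarity rather than keyword overlap, so "similar in meaning" can match even when the words differ. A minimal sketch of that ranking step, using made-up toy vectors in place of a real embedding model:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (a real model would produce
# hundreds or thousands of dimensions; these values are invented).
passages = {
    "evacuation routes for flooding": [0.9, 0.1, 0.2],
    "earthquake shelter checklist":   [0.2, 0.8, 0.3],
    "annual budget overview":         [0.1, 0.1, 0.9],
}
query = [0.85, 0.2, 0.25]  # e.g. an embedded query about flood evacuation

# Rank passages by similarity to the query, most similar first.
ranked = sorted(passages,
                key=lambda p: cosine_similarity(query, passages[p]),
                reverse=True)
print(ranked[0])  # → evacuation routes for flooding
```

A vector database like pgVector does exactly this ranking, but at scale and with an index, instead of a full scan in application code.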
3. Why Leach — They didn't want a hire; they wanted a few dozen hours of design help
Tominaga: What made you decide to go outside for help?
Mr. Oda: More than the implementation itself, we wanted a sounding board for design and architecture. There wasn't enough demand to justify a full-time hire. But if we could bring in even a few dozen hours of a capable architect's time, we felt the whole project would move forward a step.
I knew Mr. Tominaga through our AIIT (Advanced Institute of Industrial Technology) connection — his reputation as an architect was strong, and his AWS expertise ran deep. He holds all 12 AWS certifications. At the time we consulted him, generative AI wasn't as developed as today; even searching the web for infra best practices had its limits. As someone who could fill that gap, he was essentially our only option.
Mr. Oda: What genuinely helped, once we started, was that he didn't just hand us a technical "answer" — he laid out the axes for how to choose, alongside the answer itself. His Slack responses were fast, so decision items never piled up.
The other big factor was fitting within the project's cost envelope. Even a ~¥60M project can't realistically hire a dedicated architect. That's exactly why "order design help only, for only the hours you need" was the right shape.
What Mizunara wanted was not to hand off the entire build. They needed help with a specifically scoped set of roles:
- Organize the comparison axes for tech selection, tuned to the actual constraints
- Estimate the cost and operational load up front, before choosing
- Surface the pitfalls ("trap points") you tend to hit during implementation, ahead of time
That "hand off design only" need lined up precisely with Leach's Generative AI Advisor service.
4. What Leach provided — 7 candidates compared, "trap points" flagged in advance
Leach's primary contribution was selecting the similarity-search approach. Starting from the fuzzy "we want to find similar documents" request, Leach first proposed vector search. Then seven candidates were laid out and compared on three axes: cost, operational load, and fit to requirements.
| Candidate | Positioning | Decision point |
|---|---|---|
| Improve MySQL full-text search | Incremental extension of the existing stack | Low onboarding cost, but clear expressiveness limits for similarity search |
| Aurora PostgreSQL + pgVector | Leading candidate | Good fit with the existing Serverless v2 stack; low migration cost |
| RDS for PostgreSQL + pgVector | Leading candidate | Comparable to Aurora on operational simplicity and cost balance |
| OpenSearch Serverless | Dedicated search engine | Strong for very large data volumes, but operationally heavy for this scope |
| MemoryDB for Redis | Adjacent candidate | Has vector capabilities, but lower alignment with this specific goal |
| DocumentDB | Adjacent candidate | Ruled out after reviewing fit and extensibility |
| Neptune | Adjacent candidate | Strong for graph use cases; overkill for this scenario |
Mr. Oda: What I valued was that each option came with a cost estimate at our actual scale. There was a hard ceiling on server costs — cross it and we were done. With no dedicated infra person, being able to factor in operational weight as well was huge.
OpenSearch was an option, but we didn't need something that heavy this time. The value was choosing "the tech that fits these constraints" rather than "the strongest tech."
PostgreSQL + pgVector emerged as the leading direction. Aurora PostgreSQL and RDS for PostgreSQL were then laid out side by side, along with the criteria for choosing between them. In the end, given that the existing MySQL ran on Aurora Serverless v2, plus cost and migration considerations, the decision landed on Aurora PostgreSQL on Serverless v2 with pgVector.
"As long as the design and the trap points are clear in advance, in today's world the implementation gets done. Having that laid out for us was a huge help."
— Masayuki Oda, CTO, Mizunara Co., Ltd.
Mr. Oda: Honestly, the delivered PDF felt more than sufficient on its own. We had set an hour cap, but we didn't feel any need to burn through the remaining time. It felt more like, "I'd feel bad asking for more."
Beyond the comparison table, Leach also shared, in advance, the specific points where implementation tends to stumble: when and how to install pgVector, initialization order, differences between Docker and Aurora environments, and CloudFront cache operations. It wasn't mere tech research — it was support that "looked ahead to where you'd actually trip."
5. One year on — Search is in daily use; maintenance overhead is near zero
pgVector install pitfalls were sidestepped in advance
Mr. Oda: When we actually built it, validation and rollout went smoothly. What particularly helped was the pgVector install and initialization traps. "When and how to install in Docker vs. Aurora" is easy to get stuck on, but because we'd talked it through beforehand, we didn't.
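For reference, the initialization-order trap described here lives at the SQL level: the extension must exist before any vector column or index is created, and on Aurora PostgreSQL the extension is enabled via SQL, while in a local Docker Postgres the pgvector package must also be present in the image. A sketch under those assumptions; the table and column names are illustrative, not from the project:

```sql
-- Must run before any table uses the vector type.
CREATE EXTENSION IF NOT EXISTS vector;

-- Illustrative schema: one row per document passage.
CREATE TABLE passages (
    id        bigserial PRIMARY KEY,
    body      text NOT NULL,
    embedding vector(1536)   -- dimension must match the embedding model
);

-- Approximate-nearest-neighbor index for cosine distance.
CREATE INDEX ON passages USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);

-- Top-5 passages most similar to a query embedding ($1),
-- using pgVector's cosine-distance operator <=>.
SELECT id, body
FROM passages
ORDER BY embedding <=> $1
LIMIT 5;
```

Note that an ivfflat index is typically built after the table has data, since its cluster centers are computed from existing rows.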
Manual CloudFront operations were folded into CI/CD
The support wasn't limited to the search backend. A proposal was also made to improve CloudFront cache invalidation after deploys.
**Before:** After every deploy, engineers logged into the CloudFront console and manually clicked the cache-invalidation button.

**After:** An Invalidate stage was added after the Deploy stage in CodePipeline, running `aws cloudfront create-invalidation` automatically.
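Wired into the pipeline, the automated step can be as small as a single CLI call. A sketch of such a post-deploy command; the distribution ID is a placeholder, not the project's actual value:

```shell
# Runs in the pipeline's Invalidate stage, after Deploy succeeds.
# E1234567890ABC is a placeholder CloudFront distribution ID.
aws cloudfront create-invalidation \
    --distribution-id E1234567890ABC \
    --paths "/*"
```

Invalidating `/*` is the simplest policy; narrower path patterns can reduce invalidation counts if billing for them ever matters.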
Mr. Oda: For things like this, what helped wasn't "can we implement it" but "what's the natural way to wire it in" — and we had someone to discuss that with. Not just search; it felt like we'd gained a sounding board for the whole architecture.
A year in, people search just as much as they need
Mr. Oda: Similarity search isn't a flashy feature, but it delivers cleanly on the need to find the right passage in thousands of pages at a natural granularity. Editors and reviewers search when they need to. It's a grounded use pattern.
Roughly a year in, operations are quiet — maintenance overhead is essentially nil. Document update frequency isn't high, so it's in the "quietly keeps running" phase.
Facing a similar design-decision moment?
"Not worth a hire, but too heavy to decide alone" — it's a frequent situation on small and mid-size development teams. Leach Generative AI Advisor starts at ¥50,000/month, with a 1-month trial to begin.
6. Even in the age of generative AI, decision-making still needs a sounding board
What stood out in this interview was that Mizunara valued Leach not as an "implementation contractor," but as a conversation partner who could narrow the options down given the constraints.
Mr. Oda: Back when we consulted, generative AI wasn't as developed as today. Gathering best practices from the web was our best bet, and the part about judging them against our own constraints had to rely on people.
Now, generative AI has advanced and the number of technical options has, if anything, increased. But the more options you have, the harder it is to choose. Taking AI's answers at face value doesn't guarantee they'll fit the constraints of a live project. Knowing best practices isn't enough — you need someone who can pick up the constraints of a project that's already in motion and make a judgment call. That part still isn't easy for AI alone.
Mr. Kinoshita: Even when people are told "solve it with AI," many of them end up stuck right there. AI surfaces candidates, but to decide which to pick, you still need someone to talk it through with.
Deep domain knowledge and subtle constraints especially aren't well represented on the open web. The value of someone who can pick up the right context and translate it into decisions that fit the field doesn't fade as AI gets smarter — that's how I feel.
Mr. Oda: To begin with, someone you can retain as a technical advisor by the hour barely exists in the market. Consult only for the period you need, only on the topics you need — especially for small and mid-size development teams, it's a realistic option.
You get the knowledge of a highly capable expert at a cost lower than hiring. And with a ¥50,000/month starting price and short engagement terms, the pricing is easy to wrap your head around. As a business model, it's genuinely solid.
7. Looking ahead — Continuing the relationship, taking on the next challenge
Mr. Oda: This project isn't the end. We'd like to keep the relationship healthy and work together whenever a new question comes up. As generative AI continues to evolve and options multiply, a sounding board for design matters even more.
Mr. Kinoshita: The connection with Mr. Tominaga also runs through NAIST (Nara Institute of Science and Technology). Engineers from NAIST are genuinely excellent across the board, and the network keeps opening up new connections.
Mr. Oda: Once we were all out in the working world, I came to appreciate that caliber all over again. The quality of this engagement feels like a natural extension of that.
Mr. Oda: For anyone wrestling with technical challenges, just reaching out is a good first step. For people who understand the value of bouncing ideas off a capable architect, this service really lands. Lighter than hiring, but still able to engage deeply when it matters. That balance worked for us.
Even in the age of generative AI, what moves projects forward isn't "plausible-sounding answers" — it's decisions that fit the constraints. Mizunara's case shows that you don't need to outsource the implementation; simply supplementing the design and decision-making with a strong outside expert can meaningfully change the outcome.
Editor's note — Why this price and this contract shape
Frankly: Leach doesn't aim to earn big from the Generative AI Advisor service alone. Typical technical consulting runs ¥150,000–500,000 per month on 6- to 12-month contracts. Leach is set up at ¥50,000/month, a 1-month trial, and a 3-month minimum engagement — among the lowest prices in the industry, with the shortest terms. The design is deliberate: to meet the market gap where "a lawyer-style, hourly technical advisory service barely exists," in the most accessible form possible.
When a project reaches the point of "we'd like to hand this through implementation too," we take that on as generative AI contract development or app production, implemented by the same team straight through — and that's the core of Leach's revenue. Advisor is the "front door"; contract development, once trust is built, is the "main body." A two-stage model.
In the interview, CTO Mr. Oda also affirmed this structure, calling it "a low monthly price, short engagement terms, and a strategy that leads to project work — the design is solid." Use the advisory on its own if that's all you need; hand implementation to us when you need it. Try small first, go deep when required — that's how the service is designed.
A technical advisor you can retain by the hour — like a lawyer
From ¥50,000/month · 1-month trial available · 3-month minimum engagement · Real-time Slack support · Implementation & contract development available when needed
See Leach Generative AI Advisor →
For specific inquiries, please use the contact form

