From Spreadsheets to Structured Claims: Building a GraphQL Backend for Healthcare Reimbursements

By Prithvi Kumar · April 15, 2026 · 5 min read


Introduction

Healthcare reimbursement workflows are messy in the real world. Data comes from different people, at different times, in different formats, and usually under time pressure. One missed phone number, one delayed document, or one mismatched policy field can stall the entire claim lifecycle.

That’s the problem this backend tackles: turning a fragile, email- and spreadsheet-driven process into a structured, API-driven system where patient records, reimbursement details, policy info, settlement metadata, and document handling all live in one consistent backend.

The project is medfocus_backend, and it’s built as a GraphQL API over a MySQL domain model focused on operational reimbursement workflows.

Project Overview

At a high level, this service acts as a backend operating layer for reimbursement operations. It centralizes authentication, patient onboarding, claim related entities, and document upload/download flows.

What it does

  • Handles superadmin and client login/signup flows

  • Manages patient and reimbursement lifecycle data

  • Stores related policy, bank, settlement, journey, and remark records

  • Exposes file upload/download via pre-signed URLs

  • Uses relational integrity through Sequelize model associations

Key capabilities

  • Single GraphQL surface for all operations

  • JWT-based protected mutations/queries in core modules

  • Transactional writes for multi-table operations (e.g. client creation across login + client tables)

  • Schema modularization by domain (patient, reimbursement, policy, settlement, etc.)

Tech Stack Deep Dive

Node.js + Apollo Server + GraphQL

GraphQL is a good fit here because reimbursement data is relational and frontends usually need selective reads: a little patient info, plus policy bits, plus latest status, plus remarks. A REST-first design would likely either overfetch or sprawl into many narrow endpoints.

Why this choice worked

  • Flexible querying for UI workflows that evolve quickly

  • Central schema as contract for multiple data domains

  • Easy module-by-module expansion using typeDef + resolvers
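The module-by-module expansion mentioned above can be sketched roughly as follows. Module and field names here are illustrative assumptions, not the actual repo layout: each domain exports its own typeDefs and resolver map, and an assembly step merges them into the arrays Apollo Server accepts.

```javascript
// Hypothetical domain modules; the real codebase's names may differ.
const patientModule = {
  typeDefs: /* GraphQL */ `
    type Patient { id: ID! name: String! }
    type Query { patient(id: ID!): Patient }
  `,
  resolvers: {
    Query: {
      patient: (_parent, { id }, ctx) => ctx.models.Patient.findByPk(id),
    },
  },
};

const policyModule = {
  typeDefs: /* GraphQL */ `
    type Policy { id: ID! policyNumber: String! }
    extend type Query { policy(id: ID!): Policy }
  `,
  resolvers: {
    Query: {
      policy: (_parent, { id }, ctx) => ctx.models.Policy.findByPk(id),
    },
  },
};

// Merge every domain module into one schema surface.
function assembleSchema(modules) {
  return {
    typeDefs: modules.map((m) => m.typeDefs),
    resolvers: modules.reduce((acc, m) => {
      for (const [type, fields] of Object.entries(m.resolvers)) {
        acc[type] = { ...acc[type], ...fields }; // shallow-merge per root type
      }
      return acc;
    }, {}),
  };
}

const schema = assembleSchema([patientModule, policyModule]);
// new ApolloServer({ typeDefs: schema.typeDefs, resolvers: schema.resolvers })
```

The nice property of this pattern is that adding a new domain (say, settlements) is one new module plus one entry in the assembly list.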

Trade-offs

  • Resolver complexity can creep in without strong service-layer boundaries

  • Auth consistency needs discipline at the resolver level
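One way to keep that resolver-level auth discipline cheap is a tiny guard helper. This is a minimal sketch, not the codebase's actual API: `requireRole` and the shape of `context.user` are assumptions, and in practice the context builder would verify the JWT (e.g. with a library like jsonwebtoken) and attach the decoded user before any resolver runs.

```javascript
// Hypothetical guard: fail fast before touching data.
function requireRole(context, ...allowedRoles) {
  if (!context.user) {
    // No decoded JWT on the context means the token was missing or invalid.
    throw new Error('UNAUTHENTICATED: missing or invalid token');
  }
  if (allowedRoles.length && !allowedRoles.includes(context.user.role)) {
    throw new Error(`FORBIDDEN: requires one of ${allowedRoles.join(', ')}`);
  }
  return context.user;
}

// Usage inside a resolver: one line of auth, then the read.
const reimbursementResolvers = {
  Query: {
    reimbursement: (_parent, { id }, ctx) => {
      requireRole(ctx, 'superadmin', 'client');
      return ctx.models.Reimbursement.findByPk(id);
    },
  },
};
```

Centralizing the check in one helper means every module fails with the same error shape, which matters once clients start branching on auth errors.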


Sequelize + MySQL

This codebase leans heavily on Sequelize for model definition, associations, and transactional writes. For a business domain with many related entities and predictable tables, this is pragmatic and productive.

Why this choice worked

  • Fast model iteration for business-heavy data

  • Built-in transactions and association support

  • Familiar SQL storage for reporting and operational support

Trade-offs

  • ORM abstractions can hide query inefficiencies

  • Requires clear migration strategy (especially when sync() and migrations coexist)
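The multi-table write mentioned earlier (client creation spanning login + client tables) illustrates the transaction pattern. This is a sketch with assumed model and field names, not the project's actual code: Sequelize's managed transaction commits when the callback resolves and rolls back if it throws, so a failed client insert never leaves an orphaned login row.

```javascript
// Dependencies are passed in to keep the sketch testable; assumed
// models (Login, Client) and fields are illustrative only.
async function createClientAccount(sequelize, models, input) {
  return sequelize.transaction(async (t) => {
    // Both writes share the transaction `t`; either both commit or neither does.
    const login = await models.Login.create(
      { email: input.email, passwordHash: input.passwordHash, role: 'client' },
      { transaction: t },
    );
    const client = await models.Client.create(
      { loginId: login.id, companyName: input.companyName },
      { transaction: t },
    );
    return { login, client };
  });
}
```

The key discipline is threading `{ transaction: t }` through every write in the callback; a write that forgets it silently escapes the rollback boundary.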


AWS S3 Signed URLs

The backend does not proxy file binaries. Instead, it generates pre-signed URLs and lets clients upload and download directly from object storage.

Why this choice worked

  • Reduces API server load and bandwidth pressure

  • Better scalability for document-heavy workflows

  • Cleaner separation between metadata handling and file transport

Trade-offs

  • URL expiry and bucket policy management become critical

  • Must enforce upload constraints carefully (size/type/prefix)
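The constraint enforcement flagged above can live in a small validation step that runs before any pre-signed URL is issued. The limits, prefix, and allowed types below are illustrative defaults, not the project's actual policy; the URL itself would then be generated with the AWS SDK (e.g. `getSignedUrl` from `@aws-sdk/s3-request-presigner`) with a short expiry.

```javascript
// Illustrative policy values -- the real service's limits may differ.
const ALLOWED_TYPES = ['application/pdf', 'image/jpeg', 'image/png'];
const MAX_BYTES = 10 * 1024 * 1024;        // 10 MB cap
const KEY_PREFIX = 'reimbursements/';      // confine uploads to one namespace

// Check a requested upload against the policy; collect every violation
// so the client gets one complete answer instead of a retry loop.
function validateUploadRequest({ key, contentType, sizeBytes }) {
  const errors = [];
  if (!key.startsWith(KEY_PREFIX)) errors.push('key outside allowed prefix');
  if (key.includes('..')) errors.push('path traversal in key');
  if (!ALLOWED_TYPES.includes(contentType)) errors.push('content type not allowed');
  if (sizeBytes > MAX_BYTES) errors.push('file too large');
  return { ok: errors.length === 0, errors };
}

// Only on { ok: true } would the resolver call the SDK to mint the
// short-lived pre-signed URL for this exact key and content type.
```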

What I Learned

This project reinforced a lesson I’ve seen repeatedly: business correctness beats framework cleverness.

  • The hardest problems weren’t GraphQL syntax; they were transactional integrity, schema evolution, and auth consistency.

  • A modular GraphQL schema is great, but only if operational concerns (logging, error shape, auth) are standardized as the codebase grows.

  • Shipping fast with ORM + resolver logic works, but eventually you want a clearer service layer to reduce duplication and improve testability.

I also relearned the importance of “boring” engineering: predictable naming, explicit relationships, and keeping side effects (like file transfer) out of core request paths.

Future Improvements

If I were taking this to the next maturity level, I’d prioritize:

  • Claim completeness assistant

Use AI to detect missing fields/documents before submission (e.g., absent discharge summary, unreadable invoice, missing policy number), reducing rejection cycles for users.

  • Smart document extraction (OCR + LLM validation)

Auto-read uploaded PDFs/images and prefill reimbursement fields like hospital name, admission date, billed amount, and diagnosis text, so users spend less time on manual entry.

  • Eligibility and policy guidance chatbot

A context-aware assistant can answer “Is this treatment covered?”, “What documents are needed?”, and “What is my claim stage?” in plain language using policy + claim data.

  • Fraud and anomaly flagging for faster, fairer review

AI can surface unusual billing patterns, duplicate invoices, or suspicious claim timelines so genuine users get faster approvals while risky cases are escalated.

  • Turnaround time prediction

Provide estimated settlement timelines using historical data, helping users set realistic expectations and reducing support queries.

  • Agent side productivity copilots

For internal ops teams, AI summaries of patient history, prior remarks, and pending actions can reduce handling time and improve user response quality.

Conclusion

This backend is a strong example of a practical, business first system: one GraphQL API, a relational data model shaped around real reimbursement operations, and a storage pattern that scales without overcomplicating deployment.

It’s not trying to be trendy architecture for architecture’s sake. It’s solving a high-friction domain with sensible engineering choices, and that’s exactly the kind of system I like building.