Case Study

SDLC Analyzer

A backend system that turns early software requirement inputs into structured engineering, security, and compliance signals, using model-assisted analysis behind stable API boundaries.

.NET 8 · ASP.NET Core · ML.NET · Hugging Face · Clean Architecture

Problem Space

What needed to work well

Teams often discover security and compliance issues too late in the lifecycle because requirement inputs are not translated into actionable engineering signals early enough.

The goal was to build a backend system that could accept requirement context, run analysis workflows, and return useful insights in a structured, repeatable way.

Responsibilities

What I owned

  • Designed backend APIs for submitting inputs and returning analysis results.
  • Integrated ML-based analysis without letting model concerns blur architectural boundaries.
  • Structured the system with clean architecture principles.
  • Hardened input validation, output shaping, and error behavior.

Architecture

How the backend stayed understandable

Analytical systems get messy quickly when model logic, transport logic, and domain rules are mixed. I used clean architecture so the platform could stay evolvable as the analysis layer changed.

  • API layer to accept inputs and expose analysis results
  • Application layer to coordinate analysis workflows
  • Domain layer to represent SDLC concepts and rules
  • Infrastructure layer for ML.NET models and external integrations
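A minimal sketch of how those layers can stay separated in an ASP.NET Core minimal-API setup. Everything named here (RequirementInput, AnalysisResult, IRequirementAnalyzer, MlNetRequirementAnalyzer, the /analysis route) is illustrative, not the actual codebase:

```csharp
// API layer: transport only — bind input, delegate, return a result.
var builder = WebApplication.CreateBuilder(args);

// Infrastructure layer: an ML.NET-backed implementation (assumed here as
// MlNetRequirementAnalyzer) is registered against the application-layer
// interface, so upper layers never see model details.
builder.Services.AddScoped<IRequirementAnalyzer, MlNetRequirementAnalyzer>();

var app = builder.Build();

app.MapPost("/analysis", async (RequirementInput input,
                                IRequirementAnalyzer analyzer,
                                CancellationToken ct) =>
    Results.Ok(await analyzer.AnalyzeAsync(input, ct)));

app.Run();

// Domain layer: SDLC concepts with no framework or ML dependencies.
public record RequirementInput(string Text);
public record AnalysisResult(string Category, double Confidence);

// Application layer: the workflow contract the API depends on.
public interface IRequirementAnalyzer
{
    Task<AnalysisResult> AnalyzeAsync(RequirementInput input, CancellationToken ct);
}
```

The point of the interface is direction of dependency: the API and application layers depend on the contract, and only the infrastructure registration knows ML.NET exists, so the analysis layer can change without touching the endpoints.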

Security

Handling analysis safely

Even when a product is insight-driven, the backend still needs disciplined controls around input handling, endpoint access, and how much internal behavior gets exposed.

  • Validated and sanitized inputs before processing
  • Restricted access to analysis endpoints
  • Avoided exposing internal model details through the API surface
  • Centralized error handling for safe failure responses
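The controls above can be sketched as a standard ASP.NET Core pipeline. The route, payload shape, and size limit below are assumptions for illustration:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddAuthorization();
var app = builder.Build();

// Centralized error handling: every unhandled exception becomes the same
// generic problem response, so stack traces and model internals never
// reach the caller.
app.UseExceptionHandler(errorApp => errorApp.Run(async context =>
{
    context.Response.StatusCode = StatusCodes.Status500InternalServerError;
    await context.Response.WriteAsJsonAsync(
        new { error = "Analysis request could not be processed." });
}));

app.UseAuthorization();

// Input validation at the boundary: reject empty or oversized payloads
// before any analysis work starts.
app.MapPost("/analysis", (AnalysisRequest request) =>
{
    if (string.IsNullOrWhiteSpace(request.Text) || request.Text.Length > 10_000)
        return Results.BadRequest(new { error = "Requirement text is missing or too large." });
    return Results.Accepted();
})
.RequireAuthorization(); // analysis endpoints are not anonymous

app.Run();

public record AnalysisRequest(string Text);
```

Returning one generic error body is deliberate: the API surface confirms that a failure happened without describing which internal component, model, or rule produced it.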

Reliability

Supporting heavier analytical work

Analysis-heavy systems need to remain responsive even when workloads vary, so the backend was shaped to keep transport and orchestration concerns cleanly separated.

  • Optimized request processing to avoid unnecessary blocking
  • Kept APIs stateless to support scaling strategies later
  • Separated analysis responsibilities from transport handling

Next Iteration

What I would improve next

  • Add asynchronous processing for heavier analysis workloads.
  • Introduce caching for repeated or similar analysis requests.
  • Expand observability with structured logs and better diagnostics.
  • Support broader SDLC stages and richer model inputs over time.
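One way the asynchronous-processing improvement could look, sketched with System.Threading.Channels and a hosted worker. The types and capacity are hypothetical, not a committed design:

```csharp
using System.Threading.Channels;

// Accepted requests go onto a bounded in-memory channel; a background
// worker drains them, so heavy analysis never blocks the request path.
public record QueuedAnalysis(Guid Id, string Text);

public class AnalysisQueue
{
    private readonly Channel<QueuedAnalysis> _channel =
        Channel.CreateBounded<QueuedAnalysis>(capacity: 100);

    public ValueTask EnqueueAsync(QueuedAnalysis item, CancellationToken ct) =>
        _channel.Writer.WriteAsync(item, ct);

    public IAsyncEnumerable<QueuedAnalysis> DequeueAllAsync(CancellationToken ct) =>
        _channel.Reader.ReadAllAsync(ct);
}

public class AnalysisWorker : BackgroundService
{
    private readonly AnalysisQueue _queue;

    public AnalysisWorker(AnalysisQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var item in _queue.DequeueAllAsync(stoppingToken))
        {
            // The heavy ML analysis would run here; results would be
            // persisted and fetched later via a status/result endpoint.
        }
    }
}
```

A bounded channel also gives natural backpressure: when the queue is full, enqueueing waits (or the endpoint can return 429), which keeps bursty workloads from overwhelming the analysis layer.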

Next Move

Working on a backend that mixes engineering and analysis?

That blend is especially interesting to me when the architecture still has to stay secure, testable, and understandable for the team shipping it.