Discord Logging Platform
Built distributed logging and moderation infrastructure that processes messages, moderation actions, and voice activity for real-time operational insight.
backend engineer / distributed systems / operational tooling
Backend engineer with 3+ years of experience building scalable APIs, distributed systems, production backend services, logging platforms, and commerce infrastructure.
Systems I've built
This section is intentionally not a project gallery. It is a survey of the systems, scale, constraints, and production responsibilities behind the work.
Built distributed logging and moderation infrastructure that processes messages, moderation actions, and voice activity for real-time operational insight.
Led scalable REST API development for ONDC order flow, tracking, operational event monitoring, and logistics-side service communication.
Implemented Redis-based caching for high-traffic API paths while keeping source-of-truth services and invalidation behavior explicit.
Designed delayed payment disbursement, automated auction reassignment, scheduled financial jobs, ERP integrations, and reporting workflows.
Designed event ingestion and search workflows for real-time moderation insights, operational monitoring, and operator-facing automation.
Created coin-based private voice channels with billing modes, access control, ownership transfer, and lifecycle cleanup.
System dossiers
Each dossier explains the backend problem, why the architecture mattered, and how the system behaves under real operational pressure.
system dossier / 01
Backend owner for Discord event ingestion, logging, and workflow design
Large Discord communities need message activity, moderation actions, and voice events ingested, searched, and surfaced fast enough for live operators to act on them.
Discord gateway events flow through ingestion handlers, moderation workflows, Redis-backed operational paths, MongoDB persistence, and operator-facing insight surfaces.
Separated interaction handling from event ingestion, kept moderation workflows permission-aware, and shaped event records for search, debugging, and operational monitoring.
system dossier / 02
Graduate Engineer Trainee - backend APIs and integration workflows
Logistics order flow requires fast API responses, reliable tracking data, centralized logs, and integration paths that can handle marketplace and fulfillment-side constraints.
ONDC-facing REST APIs connected order flow, Redis cache paths, structured S3 logs, monitoring workflows, and dashboard surfaces for operational visibility.
Improved read performance with Redis-backed paths and moved operational logs into durable storage for inspection without slowing request handling.
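A minimal sketch of the durable-logging half of that tradeoff: requests append records to an in-memory queue and return immediately, while a background worker drains the queue in batches and ships them to durable storage. This is illustrative, not the production code; `upload_batch` is a hypothetical stand-in for an S3 put.

```python
import queue
import threading

class BufferedLogShipper:
    """Ship logs to durable storage without blocking request handling."""

    def __init__(self, upload_batch, batch_size=100):
        self._queue = queue.Queue()
        self._upload_batch = upload_batch  # stand-in for an S3 client call
        self._batch_size = batch_size
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, record: dict) -> None:
        # Hot path: enqueue and return; never block on storage I/O.
        self._queue.put(record)

    def _drain(self) -> None:
        batch = []
        while True:
            record = self._queue.get()
            if record is None:            # shutdown sentinel
                break
            batch.append(record)
            if len(batch) >= self._batch_size:
                self._upload_batch(batch)  # one storage write per batch
                batch = []
        if batch:
            self._upload_batch(batch)      # flush the remainder on shutdown

    def close(self) -> None:
        self._queue.put(None)
        self._worker.join()
```

The request path pays only the cost of a queue put; batching keeps the number of storage writes bounded regardless of request volume.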
system dossier / 03
Software Developer for Magnolia Pearl commerce backend workflows
Commerce operations needed backend workflows for delayed payment disbursement, auction reassignment, financial processing, reporting, and external ERP integrations.
ASP.NET and Razor Pages workflows coordinate PayPal delayed disbursements, scheduled background jobs, SQL reporting paths, Azure Blob Storage, and third-party ERP APIs.
Designed transactional workflow boundaries for bidding and financial operations so scheduled jobs, reporting, and external integrations can evolve without fragile handoffs.
system dossier / 04
Backend developer for real-time community systems
Live communities need voice-channel lifecycle automation, access control, user support, and moderation feedback loops that work while people are actively using them.
Gateway events drive voice-channel state, coin billing, membership permissions, support actions, and cleanup jobs.
Designed workflows around explicit ownership, membership state, and cleanup paths so real-time resources do not drift out of sync with user actions.
Architecture visualization
These diagrams show the pattern language I use: validate the boundary, isolate the workflow, persist enough context, and make the system observable.
A request path that keeps user-facing actions fast while preserving enough context for retries, audits, and operator review.
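The shape of that request path, as a hedged sketch: the handler does only validation plus a durable enqueue, capturing actor, action, and a correlation id so the slow work can be retried and audited later. `outbox` is an illustrative stand-in for a durable table or queue.

```python
import uuid

outbox = []  # stand-in for a durable work queue or outbox table

def handle_request(actor: str, action: str, payload: dict) -> dict:
    """Validate at the boundary, persist context, respond fast."""
    if not actor or not action:
        raise ValueError("actor and action are required")
    job = {
        "correlation_id": str(uuid.uuid4()),  # ties logs and retries together
        "actor": actor,
        "action": action,
        "payload": payload,
        "attempts": 0,
        "status": "queued",
    }
    outbox.append(job)  # the only write on the hot path
    # Respond immediately; slow work happens off the request path.
    return {"accepted": True, "correlation_id": job["correlation_id"]}

def process_next() -> None:
    """Worker side: pick up queued work with its full context intact."""
    job = next(j for j in outbox if j["status"] == "queued")
    job["attempts"] += 1
    # ... slow work goes here; context survives for retries and audits ...
    job["status"] = "done"
```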
Operational logs are designed as a product surface: queryable, explainable, and tied to the user or workflow that created them.
Cache hot reads close to the API while keeping source-of-truth ownership and invalidation rules explicit.
Engineering notes
Short writeups that make the portfolio feel like a working notebook: cache semantics, logging quality, background jobs, and real-time state.
The cache has to make ownership explicit: what is authoritative, how entries expire, which paths can tolerate stale reads, and what operators should inspect when behavior diverges.
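Those ownership rules fit in a few lines. A minimal cache-aside sketch, with Redis replaced by an in-process dict and the names (`get_user`, `update_user`) purely illustrative: the database stays authoritative, entries carry a TTL, and writers must invalidate.

```python
import time

CACHE: dict = {}
TTL_SECONDS = 60
DB = {"u1": {"name": "Ada"}}  # source of truth; the cache never is

def get_user(user_id: str) -> dict:
    entry = CACHE.get(user_id)
    if entry and entry["expires_at"] > time.time():
        return entry["value"]          # stale reads tolerated until TTL
    value = DB[user_id]                # cache miss: read the authority
    CACHE[user_id] = {"value": value,
                      "expires_at": time.time() + TTL_SECONDS}
    return value

def update_user(user_id: str, value: dict) -> None:
    DB[user_id] = value                # write the authority first...
    CACHE.pop(user_id, None)           # ...then invalidate explicitly
```

When behavior diverges, an operator has exactly two things to inspect: the authoritative row and the cached entry's expiry.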
A useful log stream carries actor, target, workflow, correlation id, and state transition. That makes moderation and order systems debuggable without reconstructing history from raw noise.
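The record shape that note describes, sketched with illustrative field names: every event says who acted, on what, inside which workflow, correlated to its siblings, and expressed as a state transition.

```python
import json

def audit_event(actor, target, workflow, correlation_id,
                from_state, to_state):
    """Build one structured log record with the five fields above."""
    return {
        "actor": actor,                    # who performed the action
        "target": target,                  # what the action applied to
        "workflow": workflow,              # which business flow it belongs to
        "correlation_id": correlation_id,  # ties related events together
        "transition": {"from": from_state, "to": to_state},
    }

# Hypothetical moderation event, serialized for the log stream.
line = json.dumps(audit_event(
    actor="mod-42", target="msg-9001", workflow="moderation.delete",
    correlation_id="c-7f3a", from_state="visible", to_state="removed",
))
```

A stream of records like this can be filtered by actor, target, or correlation id without reconstructing history from raw noise.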
Payment disbursement, auction reassignment, reporting, and cleanup jobs should map to clear business states so retries are safe and failures are visible to humans.
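An illustrative state machine for one such job, with the disbursement call as a stand-in: completed work is skipped so retries are safe, and after a bounded number of attempts the job lands in a state a human will see.

```python
MAX_ATTEMPTS = 3

def run_disbursement(job: dict, disburse) -> dict:
    """One scheduled run; safe to invoke repeatedly on the same job."""
    if job["state"] == "disbursed":
        return job                      # idempotent: done work is skipped
    if job["attempts"] >= MAX_ATTEMPTS:
        job["state"] = "needs_review"   # failures become visible, not silent
        return job
    job["attempts"] += 1
    try:
        disburse(job["payee"], job["amount"])  # stand-in for the payment call
        job["state"] = "disbursed"
    except Exception as exc:
        job["state"] = "retryable"      # explicit state, not a lost exception
        job["last_error"] = str(exc)
    return job
```

The business states (`pending`, `retryable`, `disbursed`, `needs_review`) are the contract; the scheduler only ever re-runs the function.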
Voice channels, permissions, and live moderation flows need cleanup and reconciliation paths. The hard part is keeping resources aligned after disconnects, retries, and manual intervention.
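The reconciliation core is small once desired and observed state are both explicit. A hedged sketch, with `create` and `delete` standing in for the real Discord API calls: compare what billing and ownership records say should exist against what actually exists, then converge.

```python
def reconcile(desired: set, observed: set, create, delete) -> None:
    """Converge observed channel state toward the desired set."""
    for channel in desired - observed:
        create(channel)   # missing resources get recreated
    for channel in observed - desired:
        delete(channel)   # orphans from disconnects or crashes get cleaned up
```

Running this periodically means a missed gateway event or a manual intervention is a delay, not a permanent drift.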
Technical inventory
Grouped by how backend work actually gets done: language, service layer, persistence, cloud primitives, and event-driven workflows.
Production timeline
Roles where I shipped APIs, scheduled jobs, integrations, logging paths, monitoring workflows, and operator-facing automation.
Texas, US / ASP.NET, C#, Razor Pages, SQL, Azure Blob Storage, PayPal
New Delhi, IN / Node.js, Express.js, MongoDB, Redis, Amazon S3
Remote, IN / Node.js, Express.js, PostgreSQL
Remote, IN / Node.js, Express.js, MongoDB, Bcrypt, JWT
Operating philosophy
The more operational pressure a backend takes, the more it rewards clear ownership, explicit state, useful logs, and boring reliability.
I separate hot paths from operational workflows, cache the right reads, and design APIs around bounded payloads, clear ownership, and predictable failure modes.
Production code should make failure visible. I prefer explicit state transitions, durable logs, defensive integrations, and workflows that can be retried safely.
I optimize for future operators: small service boundaries, readable handlers, intentional schemas, and documentation that explains tradeoffs instead of restating code.
Owning a backend means caring about deploys, data integrity, observability, support workflows, and the humans who need to debug the system at 2 AM.
I treat logs, audit trails, dashboards, and transcripts as product surfaces. They shorten incidents, explain system behavior, and build trust with internal teams.
Credentials
A compact view of the academic and hackathon credentials behind the backend work.
Education
Bachelor of Technology, Computer Science and Engineering
Award
LNMHacks 5.0
Award
ONDC ecosystem challenge