I am an applied decision scientist building uncertainty-aware forecasting and decision support systems for high-stakes, data-scarce environments. I specialize in Bayesian modeling, inventory analytics, and human-in-the-loop workflows across regulated domains such as biotech and healthcare.
My work emphasizes principled modeling, honest failure analysis, and analytical systems that keep automated inference reliable under uncertainty.
Most data science portfolios optimize for predictive accuracy in static benchmark settings. My work instead focuses on decision-grade reliability in real operational environments characterized by sparse data, nonstationarity, and asymmetric failure costs.
Across projects, I design systems that:
- quantify uncertainty honestly rather than reporting point estimates alone,
- surface failure modes explicitly rather than failing silently, and
- route ambiguous or high-risk cases to human review instead of forcing full automation.
This perspective is informed by experience in regulated domains where false confidence, silent failure, and automation bias carry real-world consequences.
This project explores stochastic inventory forecasting under severe covariate scarcity using Poisson–Gamma conjugacy and a waste-constrained restocking policy.
It demonstrates both a principled Bayesian modeling approach and the structural limits of automated forecasting in nonstationary, human-driven consumption systems.
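As a minimal sketch of the modeling core (the prior parameters, demand history, and waste budget below are illustrative assumptions, not the project's actual code): daily demand is modeled as Poisson with a Gamma prior on its rate, so the posterior stays Gamma in closed form and the posterior predictive is Negative Binomial; the restocking rule then takes the largest order whose expected leftover stock fits a waste budget.

```python
import numpy as np
from scipy import stats

def gamma_posterior(alpha0, beta0, demand_history):
    """Conjugate update: Poisson counts under a Gamma(alpha0, beta0) rate prior
    yield a Gamma(alpha0 + sum(x), beta0 + n) posterior in closed form."""
    x = np.asarray(demand_history)
    return alpha0 + x.sum(), beta0 + len(x)

def posterior_predictive(alpha_n, beta_n):
    """The Gamma-Poisson mixture for next-period demand is Negative Binomial
    with r = alpha_n and p = beta_n / (beta_n + 1) in scipy's convention."""
    return stats.nbinom(alpha_n, beta_n / (beta_n + 1.0))

def expected_waste(q, pred):
    """E[max(q - D, 0)]: stock expected to be left over (wasted) when ordering
    q units against demand D drawn from the posterior predictive."""
    d = np.arange(q + 1)
    return float(np.sum((q - d) * pred.pmf(d)))

def restock_quantity(pred, waste_budget, q_max=500):
    """Waste-constrained policy: expected waste is nondecreasing in q, so the
    largest order whose expected leftover fits the budget maximizes service."""
    q_star = 0
    for q in range(q_max + 1):
        if expected_waste(q, pred) <= waste_budget:
            q_star = q
        else:
            break
    return q_star

# Illustrative run: weakly informative prior, two weeks of observed daily demand.
alpha_n, beta_n = gamma_posterior(2.0, 1.0, [4, 7, 3, 6, 5, 8, 4, 6, 5, 7, 3, 6, 5, 4])
pred = posterior_predictive(alpha_n, beta_n)
print("posterior mean rate:", alpha_n / beta_n)              # 5.0
print("order quantity:", restock_quantity(pred, waste_budget=1.0))
```

Because the update is closed-form, refitting after every observation is essentially free, and the same predictive distribution that drives the order quantity also yields interval forecasts an expert can sanity-check.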
Key contributions:
Artifacts:
Key takeaway: Uncertainty modeling exposed the true complexity of the consumption process. The assumed stochastic process and decision rule could not consistently deliver decision-grade forecasts, so expert oversight proved more reliable than fully automated inventory control.
This project develops a safety-aware diagnostic inference system on the Breast Cancer Wisconsin dataset that explicitly models and intercepts failure modes in probabilistic classifiers. Rather than optimizing for overall accuracy, the system introduces a post-classification reliability layer that detects high-risk predictions using geometry-derived signals and routes them to human review, enforcing a zero–false-negative constraint in clinically ambiguous regions. The core contribution is a selective inference control system that extracts additional risk signals from the information geometry of class-conditional feature manifolds, enabling reliable automation in the presence of deep class overlap and overconfident model failures.
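A minimal sketch of the routing layer, with a class-conditional Mahalanobis distance standing in for the project's geometry-derived signals (the classifier choice and the p_min and margin thresholds below are illustrative assumptions, not the system's actual values):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # y = 0 malignant, y = 1 benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
clf.fit(X_tr, y_tr)

# Class-conditional geometry on standardized features: per-class mean and
# regularized precision matrix, for Mahalanobis distances to each class manifold.
scaler = clf.named_steps["standardscaler"]
Z_tr = scaler.transform(X_tr)
geometry = {}
for c in (0, 1):
    Zc = Z_tr[y_tr == c]
    cov = np.cov(Zc, rowvar=False) + 1e-3 * np.eye(Zc.shape[1])
    geometry[c] = (Zc.mean(axis=0), np.linalg.inv(cov))

def mahalanobis(z, c):
    mu, prec = geometry[c]
    d = z - mu
    return float(np.sqrt(d @ prec @ d))

def decide(x, p_min=0.95, margin=0.5):
    """Auto-accept only confident predictions whose geometry agrees; route
    everything else to human review. A benign call is auto-cleared only if it
    sits clearly farther from the malignant class than from the benign one,
    so a malignant case can be missed only if a human clears it."""
    proba = clf.predict_proba(x.reshape(1, -1))[0]
    pred = int(np.argmax(proba))
    z = scaler.transform(x.reshape(1, -1))[0]
    benign_guard = mahalanobis(z, 0) - mahalanobis(z, 1) > margin
    if proba[pred] >= p_min and (pred == 0 or benign_guard):
        return "auto", pred
    return "review", pred

routed = [decide(x) for x in X_te]
print("sent to review:", sum(r == "review" for r, _ in routed), "/", len(X_te))
```

Tightening p_min and margin trades automation coverage for safety; the point of richer geometry-derived signals is to make that trade-off less costly than raw predicted probabilities allow.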
Key contributions:
Artifacts:
Key takeaway: By explicitly modeling geometric failure modes and enforcing selective abstention, this project turns a high-performance classifier into a controllable decision system with principled human oversight, illustrating a general framework for deploying machine learning safely in high-stakes environments.
Jithakrishna Prakash
📧 jprakashoff@gmail.com
🔗 LinkedIn: https://linkedin.com/in/jithakrishna-prakash
💻 GitHub: https://github.com/TheFifthPostulate