Full-stack implementation and AI workflow design
AI-Powered Web Platforms
Practical AI integrations across hosted and self-hosted models, with a focus on usable workflows instead of novelty demos.
- OpenAI
- Gemini
- Ollama
- LiteLLM
Context
This body of work is less about any single app and more about a recurring question: what happens when AI moves from a demo feature into an operational part of a product or workflow?
Build highlights
- Integrated hosted models from OpenAI and Gemini where iteration speed mattered most.
- Tested self-hosted open-source models through Ollama when control, latency, or cost outweighed the convenience of hosted APIs.
- Used LiteLLM as a practical abstraction layer to keep experimentation from becoming provider lock-in.
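The abstraction-layer idea can be sketched with LiteLLM's unified model-string convention: every backend is addressed through one `completion()` call, with the provider encoded in the model name. The task names and model choices below are illustrative, not from any specific project, and real calls require API keys or a running Ollama server.

```python
# Provider-agnostic routing via LiteLLM's model-string convention:
# bare names for OpenAI, "gemini/..." for Google, "ollama/..." for
# a local Ollama server. Swapping providers is a one-line dict change.
MODEL_ROUTES = {
    "fast-iteration": "gpt-4o-mini",           # hosted OpenAI
    "long-context":   "gemini/gemini-1.5-pro",  # hosted Gemini
    "self-hosted":    "ollama/llama3",          # local Ollama
}

def pick_model(task: str) -> str:
    """Resolve a workflow task to a LiteLLM model string."""
    return MODEL_ROUTES[task]

def ask(task: str, prompt: str) -> str:
    """Send one prompt to whichever backend the task maps to."""
    # Deferred import keeps the sketch importable without litellm installed.
    from litellm import completion

    resp = completion(
        model=pick_model(task),
        messages=[{"role": "user", "content": prompt}],
    )
    # LiteLLM mirrors the OpenAI response shape across all providers.
    return resp.choices[0].message.content
```

Because the calling code only ever sees a task name, experiments with a new provider stay contained in the routing table instead of leaking throughout the codebase.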
Challenge and tradeoffs
The interesting work is rarely in calling the model. It is in deciding what deserves automation, what needs guardrails, and what should stay human, because the product gets worse when the system tries to do too much.
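One way the "keep it human" guardrail can look in practice is a confidence gate: the system only auto-applies a model suggestion when a simple check passes, and escalates everything else to a person. The threshold and field names here are hypothetical, purely to illustrate the pattern.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """A model-generated suggestion with an attached confidence score."""
    text: str
    confidence: float  # score in [0, 1], model- or heuristic-derived

def route(suggestion: Suggestion, threshold: float = 0.9) -> str:
    """Return 'auto' to apply the suggestion, 'human' to escalate it."""
    return "auto" if suggestion.confidence >= threshold else "human"

# A low-confidence draft stays with a human reviewer.
print(route(Suggestion("draft reply", 0.62)))  # -> human
```

The value of the pattern is less the threshold itself than the explicit decision point: automation is opt-in per suggestion rather than the default.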
Result
These experiments continue to shape how I think about applied AI: useful when grounded in workflow design, weak when treated as decoration.