Connect with Us to Learn More - hello@remoder.com
Welcome to the AI Agents Library — your all-in-one hub for exploring intelligent systems built by Remoder.
Here, you’ll find complete walkthroughs, videos, architectural diagrams, and PDF guides that break down how real-world AI agents are designed, deployed, and scaled.
From finance to healthcare, each project showcases how AI, DevOps, and system engineering come together to build next-generation intelligent automation. 🚀

This project builds a smart AI-powered financial advisor that helps users make better investment decisions.
The agent analyzes user input such as goals, risk tolerance, and time horizon — and responds with a personalized portfolio recommendation, risk explanation, and educational insight powered by an LLM (via Ollama).
🧩 Core Components
1. FastAPI Backend (app/main.py)
• Handles all incoming requests (like /analyze or /recommend).
• Integrates components like the risk model, portfolio allocation logic, and LLM responses.
2. AI Engine (llm.py)
• Uses the Ollama server running locally in Docker.
• Sends prompts to a selected model (e.g., llama3, mistral, or deepseek) to generate explanations and insights.
• Example: “Explain this portfolio strategy to a beginner investor.”
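A minimal sketch of what llm.py might look like, using Ollama's /api/generate endpoint (a real Ollama API route); the helper names and the default model choice are assumptions:

```python
# Hypothetical sketch of llm.py. OLLAMA_BASE_URL matches the env var
# mentioned in the LLM-integration section; inside Docker Compose the
# server is reachable by its service name, "ollama".
import json
import os
import urllib.request

OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def explain(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the Ollama server and return the generated text."""
    req = urllib.request.Request(
        f"{OLLAMA_BASE_URL}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]
```

With a running Ollama container, something like `explain("Explain this portfolio strategy to a beginner investor.")` would return the model's explanation as plain text.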
3. Risk Model (risk_model.py)
• Uses Scikit-learn and simple numeric rules to estimate a user’s risk profile (e.g., conservative, balanced, or aggressive).
• Helps tailor recommendations based on individual profiles.
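A sketch of the rule-based half of such a model (the Scikit-learn part is omitted here; the inputs, weights, and thresholds are illustrative assumptions):

```python
# Hypothetical sketch of risk_model.py. Only the simple numeric rules
# are shown; thresholds and the horizon weighting are assumptions.

def risk_score(risk_tolerance: int, time_horizon_years: int) -> str:
    """Map self-reported tolerance (1-10) and horizon to a risk profile."""
    # A longer horizon increases risk capacity, capped at 30 years.
    score = risk_tolerance + min(time_horizon_years, 30) / 3
    if score < 6:
        return "conservative"
    elif score < 11:
        return "balanced"
    return "aggressive"
```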
4. Portfolio Logic (portfolio.py)
• Uses the risk score to assign proportions between stocks, bonds, and cash.
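That mapping can be as simple as a lookup table; the exact weights below are illustrative, not the project's actual allocations:

```python
# Hypothetical sketch of portfolio.py: risk profile -> asset mix.
# The proportions are illustrative placeholders.
ALLOCATIONS = {
    "conservative": {"stocks": 0.30, "bonds": 0.50, "cash": 0.20},
    "balanced":     {"stocks": 0.60, "bonds": 0.30, "cash": 0.10},
    "aggressive":   {"stocks": 0.80, "bonds": 0.15, "cash": 0.05},
}

def build_portfolio(profile: str) -> dict:
    """Return stock/bond/cash proportions for a given risk profile."""
    return ALLOCATIONS[profile]
```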
5. LLM Integration (Ollama via Docker Compose)
• The Ollama container hosts and serves the LLM.
• FastAPI communicates with it through an internal Docker network (OLLAMA_BASE_URL=http://ollama:11434).
• This setup keeps the stack lightweight and reproducible, and the model is never exposed outside the Docker network.
6. Dockerized Setup (docker-compose.yml)
• Spins up two services:
  • api → FastAPI app
  • ollama → Local LLM model
• The API waits until Ollama is ready before serving requests.
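A compose file along those lines might look like this (an illustrative sketch; the service names api and ollama come from the text above, everything else is an assumption):

```yaml
services:
  ollama:
    image: ollama/ollama          # hosts and serves the local LLM
    ports:
      - "11434:11434"

  api:
    build: .                      # the FastAPI app
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "8000:8000"
    depends_on:
      - ollama                    # start the API after Ollama
```

Note that a plain `depends_on` only orders container startup; actually waiting until Ollama is ready, as described above, typically requires a healthcheck (with `depends_on: condition: service_healthy`) or a retry loop in the app.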
🌍 Example Workflow
1. User sends their financial goal and risk tolerance (via API or UI).
2. The system computes their risk score and builds an ideal portfolio mix.
3. Ollama’s LLM generates a human-readable explanation of the advice.
4. The result is returned as a JSON or natural-language recommendation.
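The returned payload might look something like this (an illustrative shape, not the project's exact schema):

```json
{
  "risk_profile": "balanced",
  "portfolio": { "stocks": 0.60, "bonds": 0.30, "cash": 0.10 },
  "explanation": "With a 10-year horizon and moderate risk tolerance, a 60/30/10 mix balances growth with stability..."
}
```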
⸻
💡 Why It’s Important
• Teaches engineers how to combine AI + finance + real-world logic.
• Bridges machine learning models (risk analysis) and LLMs (explanation generation).
• Provides a realistic foundation for AI-driven financial planning systems.
This is an older version of the video, but I still recommend watching it as a quick overview of the agent and its basics.
This is part 2 of the video originally posted on LinkedIn.
In this video, we walk through the high-level architecture, explain the core components, and give you a solid understanding of how this AI-powered financial assistant is designed.
Whether you’re a DevOps engineer, cloud architect, or AI enthusiast, this series will show you how modern AI agents are built, deployed, and scaled using real engineering practices.
In this video, we dive deeper into the code, walk through the app functionality, and explore the Dockerfile that powers this AI project.
This is where theory meets hands-on engineering — and you’ll see exactly how the Financial Advisor AI Agent works behind the scenes.
✔️ Walkthrough of the main.py (FastAPI + model execution logic)
✔️ How the API structure is designed
✔️ How prompts and model names are passed into Ollama
✔️ Step-by-step review of the Dockerfile
✔️ How Python, FastAPI, and Ollama work together
✔️ The architecture behind running multiple models (e.g., Mistral, Gemma, Llama, DeepSeek)
Whether you’re new to AI agents or strengthening your DevOps/AI engineering skillset, this video builds a strong foundation before deployment.

© 2023 - 2025 Remoder Inc. All rights reserved. Terms, conditions, and policies apply.
Built in Chicago. Powered for the World.