Connect with Us to Learn More - hello@remoder.com
Welcome to the AI Agents Library — your all-in-one hub for exploring intelligent systems built by Remoder.
Here, you'll find complete walkthroughs, videos, architectural diagrams, and PDF guides that break down how real-world AI agents are designed, deployed, and scaled.
From finance to healthcare, each project showcases how AI, DevOps, and system engineering come together to build next-generation intelligent automation.

This project builds a smart AI-powered financial advisor that helps users make better investment decisions.
The agent analyzes user input such as goals, risk tolerance, and time horizon — and responds with a personalized portfolio recommendation, risk explanation, and educational insight powered by an LLM (via Ollama).
1. FastAPI Backend (app/main.py)
• Handles all incoming requests (like /analyze or /recommend).
• Integrates components like the risk model, portfolio allocation logic, and LLM responses.
2. AI Engine (llm.py)
• Uses the Ollama server running locally in Docker.
• Sends prompts to a selected model (e.g., llama3, mistral, or deepseek) to generate explanations and insights.
• Example: "Explain this portfolio strategy to a beginner investor."
3. Risk Model (risk_model.py)
• Uses Scikit-learn and simple numeric rules to estimate a user's risk score (e.g., conservative, balanced, aggressive).
• Helps tailor recommendations based on individual profiles.
4. Portfolio Logic (portfolio.py)
• Uses the risk score to assign proportions between stocks, bonds, and cash.
5. LLM Integration (Ollama via Docker Compose)
• The Ollama container hosts and serves the LLM.
• FastAPI communicates with it through an internal Docker network (OLLAMA_BASE_URL=http://ollama:11434).
• This setup makes it lightweight, reproducible, and secure.
6. Dockerized Setup (docker-compose.yml)
• Spins up two services:
• api — FastAPI app
• ollama — Local LLM model
• The API waits until Ollama is ready before serving requests.
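To make the risk-and-allocation idea concrete, here is a minimal sketch of what logic like risk_model.py and portfolio.py could contain. The function names, thresholds, and weightings are illustrative assumptions for demonstration, not the project's actual code:

```python
# Illustrative sketch of risk-scoring and portfolio-allocation logic.
# All thresholds and weights below are assumptions for demonstration.

def risk_score(age: int, horizon_years: int, tolerance: str) -> float:
    """Map a user profile to a 0-1 risk score (higher = more aggressive)."""
    base = {"conservative": 0.2, "balanced": 0.5, "aggressive": 0.8}[tolerance]
    # Longer horizons can absorb more volatility; cap the adjustment.
    adjustment = min(horizon_years, 30) / 100
    return min(base + adjustment, 1.0)

def allocate(score: float) -> dict:
    """Split the portfolio between stocks, bonds, and cash by risk score."""
    stocks = round(0.9 * score, 2)
    bonds = round(0.8 * (1 - score), 2)
    cash = round(1.0 - stocks - bonds, 2)
    return {"stocks": stocks, "bonds": bonds, "cash": cash}

profile = allocate(risk_score(age=35, horizon_years=20, tolerance="balanced"))
print(profile)  # the three proportions always sum to 1.0
```

In the real agent, a FastAPI endpoint would call functions like these and pass the result on to the LLM for explanation.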
Example Workflow
1. User sends their financial goal and risk tolerance (via API or UI).
2. The system computes their risk score and builds an ideal portfolio mix.
3. Ollama's LLM generates a human-readable explanation of the advice.
4. The result is returned as a JSON or natural-language recommendation.
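Step 1 of the workflow above can be sketched as a small Python client. The endpoint path, port, and field names here are assumptions for illustration; the network call is commented out so the snippet runs without a server:

```python
import json
import urllib.request

# Hypothetical request payload for an /analyze endpoint; the field
# names are illustrative assumptions, not the project's exact schema.
def build_request(goal: str, risk_tolerance: str, horizon_years: int) -> dict:
    return {
        "goal": goal,
        "risk_tolerance": risk_tolerance,
        "horizon_years": horizon_years,
    }

def send(payload: dict, url: str = "http://localhost:8000/analyze") -> dict:
    """POST the profile to the API and return its JSON recommendation."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = build_request("retirement", "balanced", 20)
    print(payload)
    # Requires the API container to be running:
    # print(send(payload))
```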
⸻
💡 Why It's Important
• Teaches engineers how to combine AI + finance + real-world logic.
• Bridges machine learning models (risk analysis) and LLMs (explanation generation).
• Provides a realistic foundation for AI-driven financial planning systems.
This is an older version of the video, but I still recommend watching it: it gives a quick overview of the agent and its basics.
This is part 2 of the video originally posted on LinkedIn.
In this video, we walk through the high-level architecture, explain the core components, and give you a solid understanding of how this AI-powered financial assistant is designed.
Whether you're a DevOps engineer, cloud architect, or AI enthusiast, this series will show you how modern AI agents are built, deployed, and scaled using real engineering practices.
✔️ High-level diagram of the Financial Advisor AI Agent
✔️ Brief overview of the system architecture
✔️ The core components: FastAPI, Ollama, Python, and more
✔️ How this project fits into the AI + DevOps workflow
✔️ What to expect in the upcoming videos
This is a beginner-friendly, engineer-focused walkthrough that shows how real AI systems are designed before any code is written or infrastructure is built.
In this video, we dive deeper into the code, walk through the app functionality, and explore the Dockerfile that powers this AI project.
This is where theory meets hands-on engineering — and you'll see exactly how the Financial Advisor AI Agent works behind the scenes.
✔️ Walkthrough of the main.py (FastAPI + model execution logic)
✔️ How the API structure is designed
✔️ How prompts and model names are passed into Ollama
✔️ Step-by-step review of the Dockerfile
✔️ How Python, FastAPI, and Ollama work together
✔️ The architecture behind running multiple models (e.g., Mistral, Gemma, Llama, DeepSeek)
Whether you're new to AI agents or strengthening your DevOps/AI engineering skillset, this video builds a strong foundation before deployment.
What You'll Learn in This Video
In this part, we cover:
✔️ A walkthrough of the core app logic
✔️ How the financial advisor agent processes user inputs
✔️ The flow between FastAPI → Python → Ollama
✔️ How model names, prompts, and parameters are passed around
✔️ Where to plug in reasoning, validation, and financial logic
✔️ Understanding how the agent "thinks" before generating output
This session is perfect for DevOps, Cloud, and AI engineers who want to understand the inner workings of an AI agent before we move into deployment.
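The FastAPI → Python → Ollama flow can be sketched as a small helper module. Ollama's non-streaming /api/generate endpoint is real, but the prompt-building function and environment handling here are illustrative assumptions; the network call is commented out so the snippet runs standalone:

```python
import json
import os
import urllib.request

# OLLAMA_BASE_URL mirrors the Docker Compose setup described earlier.
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://ollama:11434")

def build_prompt(portfolio: dict, risk_label: str) -> str:
    """Turn a portfolio allocation into a plain-language LLM prompt.
    Wording is an illustrative assumption."""
    return (
        f"Explain this {risk_label} portfolio to a beginner investor: "
        + ", ".join(f"{k} {v:.0%}" for k, v in portfolio.items())
    )

def explain(prompt: str, model: str = "llama3") -> str:
    """Send a non-streaming generate request to Ollama, return the text."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        f"{OLLAMA_BASE_URL}/api/generate",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    prompt = build_prompt({"stocks": 0.6, "bonds": 0.3, "cash": 0.1}, "balanced")
    print(prompt)
    # Requires a running Ollama container:
    # print(explain(prompt))
```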
In this episode, we shift from just running the app to packaging it properly with Docker and Docker Compose so it's portable, repeatable, and ready for real-world environments. We walk through the Dockerfile, break down each instruction, and then set up a Docker Compose configuration to manage the application cleanly.
What You'll Learn in This Video:
✔️ Detailed walkthrough of the Dockerfile for the Financial Advisor AI Agent
✔️ How we install Python, FastAPI, and Ollama in a container
✔️ Best practices for structuring Docker images for AI workloads
✔️ How Docker Compose ties services together (API, models, etc.)
✔️ How to build and run the app with simple Docker/Docker Compose commands
✔️ Why containerization is critical for AI engineering and deployment
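For orientation, a minimal docker-compose.yml along the lines described in this series might look like this. The two service names and OLLAMA_BASE_URL come from the project description; the ports, image tag, and volume name are assumptions:

```yaml
# Illustrative sketch, not the project's exact compose file.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama   # persist pulled models between runs
  api:
    build: .
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "8000:8000"
    depends_on:
      - ollama   # start order only; true readiness needs a healthcheck
volumes:
  ollama_data:
```

Note that depends_on alone only controls start order; waiting until Ollama is actually ready typically requires a healthcheck or retry logic in the API.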
In this video, we finally bring everything together and run real requests through the AI agent, showing you exactly how it responds, reasons, and provides financial insights using LLMs.
This is the part where our app stops being "just code" and becomes a working AI system.
What You'll Learn in This Video:
✔️ How to send real prompts to the Financial Advisor AI Agent
✔️ Understanding the full request → model → response flow
✔️ Live demo using curl
✔️ How the agent interprets financial questions
✔️ How the Llama model responds
✔️ What the response looks like in JSON
✔️ Troubleshooting and validating outputs
This is your first look at the agent operating like a real-world AI service.
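As a rough preview of what the JSON output could look like, here is an illustrative response shape. Every field name and value below is an assumption for demonstration; the real schema is defined by the app:

```json
{
  "risk_score": 0.7,
  "risk_label": "balanced",
  "portfolio": { "stocks": 0.6, "bonds": 0.3, "cash": 0.1 },
  "explanation": "A balanced portfolio spreads your money across stocks for growth and bonds for stability..."
}
```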
In this video, we step back from the code and the infrastructure to talk about the real purpose behind this project. This isn't just a coding exercise — it's a clear example of why companies are building internal AI agents and why engineers need these skills today.
What You'll Learn in This Video:
✔️ Why this Financial Advisor AI Agent exists
✔️ The problems it solves inside real organizations
✔️ How LLMs can assist with financial planning, risk analysis, and decision support
✔️ Why DevOps + AI engineering is becoming essential
✔️ How this project teaches the foundation of secure AI deployment
✔️ How similar agents can be built for healthcare, compliance, enterprise documentation, etc.
This part answers the most important question:
"Why should anyone invest time in building AI agents at all?"
In this video, we walk through a full cleanup and teardown of everything we built — removing containers, stopping services, and resetting the environment back to zero.
This step is essential for engineers working with Docker, AI workloads, and test environments to ensure your machine stays clean and ready for future projects.
What You'll Learn in This Video
✔️ How to stop all running Docker containers
✔️ Removing Docker Compose services
✔️ Ensuring no leftover processes or ports are running
✔️ How to safely "destroy" everything after the demo
This part gives you the exact commands and workflow to tear down the entire AI agent environment so you can start fresh or move to the next lab.
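For reference, a typical teardown with standard Docker commands looks like the following. These are generic commands, not the exact ones from the video, and they assume you run them from the directory containing docker-compose.yml:

```shell
# Stop and remove the Compose services, their network, and named volumes
docker compose down --volumes --remove-orphans

# Stop any stray containers still running (no-op if none)
docker ps -q | xargs -r docker stop

# Remove dangling images to reclaim disk space
docker image prune -f

# Confirm nothing is left running
docker ps
```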
Project 1, Version 1 (PDF Guide)
This detailed guide walks you through the creation of a full Financial Advisor AI Agent, designed to analyze user financial profiles, compute risk scores, build investment portfolios, and generate natural-language advice using modern LLMs. It breaks down the complete flow — from API request → risk engine → portfolio builder → LLM → final JSON response — in a simple, visual, and beginner-friendly way.
Inside, you'll learn how the system works end-to-end, how each component is engineered, and how real AI agents are designed, deployed, and prepared for production environments. The guide includes diagrams, examples, JSON payloads, architecture walkthroughs, and explanations tailored for engineers who want to master AI systems engineering.
Whether you're a DevOps engineer, cloud practitioner, or aspiring AI systems engineer, this PDF provides a real, practical blueprint for building secure, modular, and scalable AI agents.
A perfect companion to the Remoder Master AI Deployment video series.
Download includes:
