
Remoder
  • Home
  • Video Gallery
  • Guides / Whitepapers
  • AI Agents

🤖 AI Agents

 

Welcome to the AI Agents Library — your all-in-one hub for exploring intelligent systems built by Remoder.


Here, you’ll find complete walkthroughs, videos, architectural diagrams, and PDF guides that break down how real-world AI agents are designed, deployed, and scaled.


From finance to healthcare, each project showcases how AI, DevOps, and system engineering come together to build next-generation intelligent automation. 🚀

🧠 Lab 2 Project 1: Financial Advisor AI Agent

 

🧠 What It Does


This project builds a smart AI-powered financial advisor that helps users make better investment decisions.

The agent analyzes user input such as goals, risk tolerance, and time horizon — and responds with a personalized portfolio recommendation, risk explanation, and educational insight powered by an LLM (via Ollama).


⚙️ How It Works

 

   1.  FastAPI Backend (app/main.py)
   •   Handles all incoming requests (like /analyze or /recommend).
   •   Integrates components like the risk model, portfolio allocation logic, and LLM responses.

   2.  AI Engine (llm.py)
   •   Uses the Ollama server running locally in Docker.
   •   Sends prompts to a selected model (e.g., llama3, mistral, or deepseek) to generate explanations and insights.
   •   Example: “Explain this portfolio strategy to a beginner investor.”
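A call to the local Ollama server can look roughly like this. The endpoint (`/api/generate`) and payload shape follow Ollama's HTTP API; the helper name and the fallback string are assumptions for illustration, and the call degrades gracefully if no server is running.

```python
# Hedged sketch: querying a local Ollama server over its HTTP API.
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434"  # inside Docker Compose: http://ollama:11434

def ask_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_BASE_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.loads(resp.read())["response"]
    except OSError:
        # No Ollama server reachable (e.g. running this sketch standalone).
        return "(Ollama server not reachable)"

# Example prompt from the article:
# ask_llm("Explain this portfolio strategy to a beginner investor.")
```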

   3.  Risk Model (risk_model.py)
   •   Uses Scikit-learn and simple numeric rules to estimate a user’s risk score (e.g., conservative, balanced, aggressive).
   •   Helps tailor recommendations based on individual profiles.
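Since the article says the risk model mixes Scikit-learn with simple numeric rules, here is a rules-only sketch of the scoring idea. The thresholds, inputs, and category cutoffs are all assumptions, not the project's actual values.

```python
# Illustrative risk scoring with simple numeric rules (all thresholds assumed).
def risk_score(age: int, horizon_years: int, tolerance: str) -> str:
    score = 0
    # Longer horizons can absorb more volatility.
    score += 2 if horizon_years >= 10 else 1 if horizon_years >= 5 else 0
    # Self-reported tolerance.
    score += {"low": 0, "medium": 1, "high": 2}[tolerance]
    # Younger investors have more time to recover from drawdowns.
    score += 1 if age < 40 else 0
    if score >= 4:
        return "aggressive"
    if score >= 2:
        return "balanced"
    return "conservative"
```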

   4.  Portfolio Logic (portfolio.py)
   •   Uses the risk score to assign proportions between stocks, bonds, and cash.
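The risk-score-to-allocation step can be as simple as a lookup table. The specific weights below are hypothetical, chosen only to show the shape of the mapping.

```python
# Hypothetical mapping from risk category to stock/bond/cash weights.
ALLOCATIONS = {
    "conservative": {"stocks": 0.30, "bonds": 0.50, "cash": 0.20},
    "balanced":     {"stocks": 0.60, "bonds": 0.30, "cash": 0.10},
    "aggressive":   {"stocks": 0.85, "bonds": 0.10, "cash": 0.05},
}

def build_portfolio(risk_category: str) -> dict:
    mix = ALLOCATIONS[risk_category]
    # Sanity check: weights must sum to 100%.
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return mix
```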

   5.  LLM Integration (Ollama via Docker Compose)
   •   The Ollama container hosts and serves the LLM.
   •   FastAPI communicates with it through an internal Docker network (OLLAMA_BASE_URL=http://ollama:11434).
   •   This setup makes it lightweight, reproducible, and secure.
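On the FastAPI side, the base URL from the article is typically read from the environment, so the same code works inside the Compose network and on a bare laptop. The localhost fallback is an assumption.

```python
import os

# Inside docker-compose the API reaches Ollama over the internal network name;
# outside Docker it falls back to localhost (fallback value is an assumption).
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
```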

   6.  Dockerized Setup (docker-compose.yml)
   •   Spins up two services:
   •   api → FastAPI app
   •   ollama → Local LLM model
   •   The API waits until Ollama is ready before serving requests.
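A two-service setup like the one described might look as follows. The service names (`api`, `ollama`) follow the article; image tags, ports, and the healthcheck are illustrative.

```yaml
# Hedged sketch of a docker-compose.yml; details beyond the two service
# names are assumptions.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    healthcheck:
      test: ["CMD", "ollama", "list"]
      interval: 10s
      retries: 10
  api:
    build: .
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "8000:8000"
    depends_on:
      ollama:
        condition: service_healthy
```

Note that a plain `depends_on` only orders container startup; making the API wait until Ollama is actually ready, as the article describes, requires a healthcheck plus `condition: service_healthy` as sketched here.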


🌍 Example Workflow


   1.  User sends their financial goal and risk tolerance (via API or UI).
   2.  The system computes their risk score and builds an ideal portfolio mix.
   3.  Ollama’s LLM generates a human-readable explanation of the advice.
   4.  The result is returned as a JSON or natural-language recommendation.

⸻


💡 Why It’s Important


   •   Teaches engineers how to combine AI + finance + real-world logic.
   •   Bridges machine learning models (risk analysis) and LLMs (explanation generation).
   •   Provides a realistic foundation for AI-driven financial planning systems.


 

Quick Glance - Part 1

This is an older version of the video, but I still recommend watching it since it gives a quick glance at the agent and its basics.

Quick Glance - Part 2

This is Part 2 of the video originally posted on LinkedIn.

📊 Financial Advisor AI Agent – Remoder Lab 2 (Project 1) | Part 1

 

In this video, we walk through the high-level architecture, explain the core components, and give you a solid understanding of how this AI-powered financial assistant is designed.


Whether you’re a DevOps engineer, cloud architect, or AI enthusiast, this series will show you how modern AI agents are built, deployed, and scaled using real engineering practices.

📊 Financial Advisor AI Agent – Remoder Lab 2 (Project 1) | Part 2

 

In this video, we dive deeper into the code, walk through the app functionality, and explore the Dockerfile that powers this AI project.


This is where theory meets hands-on engineering — and you’ll see exactly how the Financial Advisor AI Agent works behind the scenes.


🔍 What You’ll Learn in This Video


✔️ Walkthrough of main.py (FastAPI + model execution logic)

✔️ How the API structure is designed

✔️ How prompts and model names are passed into Ollama

✔️ Step-by-step review of the Dockerfile

✔️ How Python, FastAPI, and Ollama work together

✔️ The architecture behind running multiple models (e.g., Mistral, Gemma, Llama, DeepSeek)


Whether you’re new to AI agents or strengthening your DevOps/AI engineering skillset, this video builds a strong foundation before deployment.

🧠 Lab 2 Project 2: Pneumonia Detection AI Agent

 

Stay Tuned!




 

© 2023 - 2025 Remoder Inc. All rights reserved. Terms, conditions, and policies apply. 

Built in Chicago. Powered for the World.

