This publication presents Agentic Forex AI, a fully containerized multi-agent system that automates daily forex market analysis using news streams, technical indicators, and rule-based validation. It integrates FastAPI, Streamlit, and Prometheus-based observability for a real-time, interactive decision-support dashboard. The solution demonstrates how open-source orchestration of AI agents can produce trustworthy recommendations (BUY / SELL / AVOID) with traceable logic, logs, and performance metrics, deployable via Docker and Railway.

Agentic Forex AI represents the culmination of the Agentic AI Developer Capstone Project, transforming a modular multi-agent prototype into a production-grade, observable, and reliable forex decision-support system. It integrates five coordinated agents (Market, News, Strategy, Validation, and Email) to generate daily recommendations across 22 major and minor currency pairs.
Each recommendation is derived from real-time market data (via yFinance) and sentiment cues (via RSS news parsing), validated through safety guards, and automatically delivered to users through an email notification pipeline. The system is fully containerized, combining FastAPI for orchestration, Streamlit for visualization, and Prometheus for monitoring and metrics collection.
Beyond its technical implementation, the project demonstrates key principles of responsible AI deployment: input validation, operational resilience, performance logging, and transparent observability. Deployed via Docker and Railway, it showcases how open, reproducible architectures can bridge the gap between intelligent automation and production reliability in real-world financial applications.
Forex traders and analysts often rely on fragmented tools, such as economic calendars, RSS feeds, and chart patterns, to make time-sensitive decisions. Agentic Forex AI unifies these signals into a reproducible, explainable multi-agent system that delivers consistent, transparent recommendations daily.
By automating routine analysis and embedding observability, the system reduces manual effort while keeping every recommendation transparent and auditable.
This system illustrates how modular AI design, when integrated with strong guardrails and documentation, can move from experimentation to reliability, a critical leap for any production-grade intelligent assistant.
The architecture unifies five operational layers inside a Docker container:
| Layer | Function | Technology |
|---|---|---|
| API Layer | Exposes REST endpoints (/api/run, /api/metrics, /api/health) | FastAPI + Uvicorn |
| Dashboard Layer | Streamlit interface for user interaction | Streamlit |
| Agent Layer | Market, News, Strategy, Validation, and Email agents | Custom Python Agents |
| Tools Layer | Core logic encapsulated as reusable tools | Python (Pydantic + Requests) |
| Monitoring Layer | Tracks metrics, health, and latency | Prometheus + Loguru |
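As a rough illustration of the API layer above, the documented endpoints could be wired up with FastAPI along these lines (a minimal sketch only; handler bodies and names are placeholders, not the project's actual api.py, and the /api/metrics route is sketched later in the monitoring section):

```python
# Minimal, illustrative FastAPI app exposing the documented routes (not the real api.py).
from fastapi import FastAPI

app = FastAPI(title="Agentic Forex AI")

@app.get("/api/health")
def health() -> dict:
    # Liveness check consumed by the dashboard and monitoring tools.
    return {"status": "ok"}

@app.post("/api/run")
def run_pipeline() -> dict:
    # In the real system this would invoke the orchestration layer
    # (safe_run_pipeline_once) and return the generated recommendations.
    return {"status": "pipeline triggered"}
```

Served with Uvicorn (for example, `uvicorn api:app --host 0.0.0.0 --port 8000`), this layer is what the Streamlit dashboard and the monitoring stack talk to.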
| Agent | Function |
|---|---|
| Market Agent | Retrieves hourly/daily forex candles using yfinance_tool.py. |
| News Agent | Fetches and filters sentiment-rich news via RSS (news_tool.py). |
| Strategy Agent | Synthesizes insights to generate BUY/SELL/AVOID stances (strategy_tools.py). |
| Validation Guard | Validates currency pair formats and manages error resilience. |
| Email Agent | Sends summarized recommendations directly to the trader's Gmail (email_tool.py). |
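As a concrete illustration of the Email Agent's role, a Gmail delivery step using smtplib (the library the project's email_tool.py is based on) might look roughly like this; the function and variable names are hypothetical:

```python
# Hypothetical sketch of a daily report email via Gmail SMTP (names are illustrative).
import smtplib
from email.mime.text import MIMEText

def send_daily_report(summary: str, sender: str, password: str, recipient: str) -> None:
    """Send the daily strategy summary through Gmail's SMTP server."""
    msg = MIMEText(summary)
    msg["Subject"] = "Daily Forex Strategy Report"
    msg["From"] = sender
    msg["To"] = recipient

    # Credentials would normally come from environment variables or a secrets store.
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, password)
        server.send_message(msg)
```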
The agent pipeline operates as a single coordinated workflow triggered through safe_run_pipeline_once(). Each agent contributes independently but communicates synchronously, ensuring that a network or data failure in one agent doesn't collapse the system.
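The stage-isolation idea behind this workflow can be sketched as follows; this is an assumed structure for illustration, and the agent functions shown are stand-in stubs rather than the project's real implementations:

```python
# Illustrative sketch of per-stage isolation in the coordinated workflow (assumed structure).
from typing import Any, Callable
from loguru import logger

# Stand-in agent stubs so the sketch is self-contained; the real agents fetch
# yFinance candles, parse RSS news, and build a BUY/SELL/AVOID stance.
def market_agent(ctx: dict) -> None:
    ctx["candles"] = []

def news_agent(ctx: dict) -> None:
    ctx["headlines"] = []

def strategy_agent(ctx: dict) -> None:
    ctx["stance"] = "AVOID"

STAGES: list[tuple[str, Callable[[dict], None]]] = [
    ("market", market_agent),
    ("news", news_agent),
    ("strategy", strategy_agent),
    # Validation and Email stages omitted for brevity.
]

def safe_run_pipeline_once(pair: str) -> dict[str, Any]:
    """Run the stages in order, isolating failures so one stage cannot crash the run."""
    ctx: dict[str, Any] = {"pair": pair, "errors": []}
    for name, stage in STAGES:
        try:
            stage(ctx)
        except Exception as exc:
            logger.error(f"{name} stage failed for {pair}: {exc}")
            ctx["errors"].append(f"{name}: {exc}")
    return ctx
```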
| Tool File | Purpose |
|---|---|
| strategy_tools.py | Defines strategy logic and confidence scoring. |
| yfinance_tool.py | Retrieves live market candles and handles exceptions. |
| news_tool.py | Fetches financial headlines with timestamp parsing and deduplication. |
| email_tool.py | Automates daily strategy email delivery. |
| mcp.py | MCP-compatible logging decorator for trace-based evaluations. |
The tools layer abstracts functionality, promoting reusability and consistency across agents. Each tool handles its own error management and logging, an important feature for reliable production pipelines.
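For instance, the kind of trace-logging decorator that mcp.py provides could be sketched like this; the behavior and file layout shown are assumptions for illustration, not the actual module:

```python
# Hypothetical trace-logging decorator in the spirit of mcp.py (details are assumed).
import functools
import json
import time
from pathlib import Path

TRACE_DIR = Path("data/traces")  # assumed location, mirroring the project's trace folder

def traced(func):
    """Record each tool call's name, outcome, and duration as a JSON-lines trace entry."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        status = "ok"
        try:
            return func(*args, **kwargs)
        except Exception:
            status = "error"
            raise
        finally:
            TRACE_DIR.mkdir(parents=True, exist_ok=True)
            entry = {"tool": func.__name__, "status": status,
                     "duration_s": round(time.time() - start, 3)}
            with open(TRACE_DIR / "tool_calls.jsonl", "a") as fh:
                fh.write(json.dumps(entry) + "\n")
    return wrapper
```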
Major Pairs:
EURUSD, USDJPY, GBPUSD, USDCHF, AUDUSD, USDCAD, NZDUSD
Minor Pairs:
EURGBP, EURJPY, GBPJPY, EURCHF, EURAUD, AUDJPY,
GBPCHF, NZDJPY, CADJPY, AUDNZD, EURCAD, CHFJPY, GBPAUD, AUDCAD, GBPCAD
All pairs are validated by input_validation.py using dynamic whitelisting to prevent unsupported or malformed currency symbols.
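A simplified version of that whitelist check might look like the following; the function name and normalization rules are illustrative rather than the actual input_validation.py code:

```python
# Simplified sketch of whitelist-based pair validation (actual logic may differ).
MAJOR_PAIRS = {"EURUSD", "USDJPY", "GBPUSD", "USDCHF", "AUDUSD", "USDCAD", "NZDUSD"}
MINOR_PAIRS = {"EURGBP", "EURJPY", "GBPJPY", "EURCHF", "EURAUD", "AUDJPY", "GBPCHF",
               "NZDJPY", "CADJPY", "AUDNZD", "EURCAD", "CHFJPY", "GBPAUD", "AUDCAD",
               "GBPCAD"}
SUPPORTED_PAIRS = MAJOR_PAIRS | MINOR_PAIRS

def validate_pair(pair: str) -> str:
    """Normalize a pair symbol and reject anything outside the supported whitelist."""
    symbol = pair.strip().upper().replace("/", "")
    if symbol not in SUPPORTED_PAIRS:
        raise ValueError(f"Unsupported or malformed currency pair: {pair!r}")
    return symbol
```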
Agent interactions are coordinated through a lightweight orchestration layer implemented in graph.py, which drives the agents through the safe_run_pipeline_once() wrapper for resilient execution.

The orchestration framework is supported by additional modules that reinforce stability and predictability:
- pipeline_safety.py: enforces timeout protection, retry logic, and safe execution wrappers for each stage.
- validation_guard.py: performs runtime checks, input validation, and controlled fallback behavior.

Together, these components coordinate all agents in a predictable, fault-tolerant workflow while keeping the system modular and easy to extend.
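The retry-and-fallback behavior described above can be illustrated with a small wrapper like the one below; this is a sketch of the general pattern, not the actual pipeline_safety.py implementation, and timeout enforcement would be layered on in a similar way:

```python
# Illustrative retry-with-fallback wrapper (pattern sketch, not pipeline_safety.py itself).
import time
from loguru import logger

def run_stage_safely(stage, *args, retries: int = 2, delay_s: float = 1.0, fallback=None):
    """Run a pipeline stage, retrying on failure and degrading to a fallback value."""
    for attempt in range(1, retries + 1):
        try:
            return stage(*args)
        except Exception as exc:
            logger.warning(f"{stage.__name__} failed on attempt {attempt}/{retries}: {exc}")
            time.sleep(delay_s)  # brief backoff before retrying
    return fallback  # controlled fallback instead of crashing the whole pipeline
```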
While the system runs autonomously, several intentional checkpoints allow human review and intervention before recommendations are acted upon.
This balance of automation + human review ensures safe and explainable deployment of AI in financial contexts.
Agentic Forex AI incorporates several user-focused design principles to ensure clarity, transparency, and ease of use. The Streamlit dashboard emphasizes clean visual hierarchy, presenting market data, news signals, and recommendations in an intuitive layout that supports quick decision-making. Errors and warnings are surfaced directly to the user through descriptive messages and real-time status indicators, reducing confusion during failure scenarios. The system's modular visualizations, interactive controls, and trace logs were designed to help users understand why a recommendation was generated, strengthening trust and explainability.
The system's modular architecture ensures flexibility, clear role separation, and easy extensibility. The agents follow event-driven logic, reacting to market or news changes and recording every run as structured trace data.

| Layer | Technology |
|---|---|
| Backend | FastAPI, Uvicorn |
| Frontend | Streamlit |
| Agents | Python async, Requests, Pydantic models |
| Monitoring | Prometheus client + Loguru logging |
| Deployment | Docker / Railway PaaS |
| Data Sources | yFinance candles, RSS news feeds |
Each technology was selected for its reliability and open-source support, enabling transparent deployment and easy contribution from the developer community.
```
├── api.py                      # FastAPI backend (main API routes)
├── dashboard.py                # Streamlit dashboard UI
├── Dockerfile                  # Docker image configuration
├── requirements.txt            # Python dependencies
├── supervisord.conf            # Supervisor config for API + Dashboard
├── README.md                   # Project documentation
│
├── src/
│   ├── agents/
│   │   ├── market_agent.py     # Fetches forex candles via yfinance
│   │   ├── news_agent.py       # Parses news from FXStreet, Investing.com, DailyFX
│   │   ├── strategy_agent.py   # Combines data → issues BUY/SELL/AVOID
│   │   └── email_agent.py      # Sends daily strategy report to Gmail
│   │
│   ├── tools/
│   │   ├── yfinance_tool.py    # Handles price data retrieval
│   │   ├── rss_tool.py         # RSS parsing and cleaning
│   │   ├── strategy_tools.py   # Core logic for combining market + news data
│   │   └── email_tool.py       # Email sending utility (smtplib-based)
│   │
│   ├── guardrails/
│   │   ├── input_validation.py # Validates currency pairs and API input
│   │   ├── pipeline_safety.py  # Safe execution wrapper for resilience
│   │   └── validation_guard.py # Ensures runtime safety & fallback logic
│   │
│   ├── utils/
│   │   └── logger.py           # Logging utilities with Loguru
│   │
│   ├── graph.py                # Defines multi-agent pipeline graph
│   ├── schemas.py              # Pydantic models (Candle, NewsItem, Recommendation)
│   └── main.py                 # Entry point for multi-agent orchestration
│
├── data/
│   ├── traces/                 # JSON logs of each pipeline execution
│   └── samples/                # Example test data or responses
│
└── tests/
    ├── test_agents.py          # Unit tests for each agent
    ├── test_pipeline.py        # Integration tests across agents
    └── test_end_to_end.py      # End-to-end system tests
```
This structure promotes maintainability and traceability. The clear separation between agents, tools, guards, and utilities ensures that future developers can extend or modify components with minimal friction.
Agentic Forex AI uses a layered testing strategy to validate behavior from individual components up to full system runs.
| Type | Description |
|---|---|
| Unit Tests | Validate each agent, tool, and guard independently (located in tests/unit/), ensuring that market data fetchers, RSS parsers, and strategy logic behave correctly in isolation. |
| Integration Tests | Simulate multi-agent pipeline execution across components (in tests/integration/), verifying that Market → News → Strategy → Validation → Email work together as expected. |
| System / End-to-End Tests | Exercise the public FastAPI endpoints (/api/run, /api/health, /api/metrics) and the orchestration layer (in tests/system/), confirming that real requests trigger the full pipeline and produce traceable outputs. |
| Performance Tests | Measure latency and throughput under multiple currency pairs and repeated runs (in tests/performance/), validating that the system remains responsive and that Prometheus metrics reflect real-world load. |
Coverage Goal: ≥ 70% of all core functions under src/agents/ and src/tools/.
Current coverage (measured via pytest --cov=src) is ~56% overall, with higher coverage on critical modules such as validation, schemas, orchestration (graph.py), and core tools (yfinance_tool.py, news_tool.py). Additional tests are planned for agent wrappers and pipeline safety fallbacks.
Example run:
```bash
pytest --maxfail=1 --disable-warnings -q --cov=src
```
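For illustration, a unit test in this layered suite might look like the example below; it targets the hypothetical validate_pair helper sketched earlier, so the import path and function name are assumptions rather than code from the actual test suite:

```python
# Hypothetical unit-test sketch (not taken from the project's tests/).
import pytest

from src.guardrails.input_validation import validate_pair  # assumed import path and name

def test_valid_pair_is_normalized():
    # A lowercase, slash-separated symbol should be normalized to the canonical form.
    assert validate_pair("eur/usd") == "EURUSD"

def test_malformed_pair_is_rejected():
    # Symbols outside the whitelist should raise a clear validation error.
    with pytest.raises(ValueError):
        validate_pair("DOGEUSD")
```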
| Aspect | Description |
|---|---|
| Purpose Clarity | Well-defined use case focused on automated forex strategy generation and daily execution. |
| Value & Impact | Real-world decision-support relevance, helping users interpret forex signals with actionable insights. |
| Technical Credibility | Reproducible pipelines with validated inputs and synchronized agent execution. |
| Usability & Documentation | Includes setup instructions, API routes, Streamlit interface, and monitoring endpoints. |
Each pipeline run logs structured JSON traces in /data/traces/ including timestamps, execution durations, validation flags, pair symbols, and email confirmations. This detailed logging ensures full traceability and helps evaluate both correctness and system health over time.
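The shape of such a trace record could look roughly like the following; the field names and values are illustrative, based on the items listed above:

```python
# Illustrative shape of a pipeline trace record written to /data/traces/ (fields assumed).
import json
from datetime import datetime, timezone
from pathlib import Path

trace = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "pair": "EURUSD",
    "duration_s": 4.2,            # example execution duration
    "validation_passed": True,    # validation flag
    "recommendation": "BUY",
    "email_sent": True,           # email confirmation
}

trace_dir = Path("data/traces")
trace_dir.mkdir(parents=True, exist_ok=True)
with open(trace_dir / "run_example.json", "w") as fh:
    json.dump(trace, fh, indent=2)
```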
The system includes several resilience mechanisms:
- Timeout protection, retry logic, and safe execution wrappers provided by pipeline_safety.py
- Structured trace logging of every run to /data/traces/ for post-failure inspection

These resilience features ensure that transient network failures or malformed data do not break the pipeline.
Metrics exposed at /api/metrics include:
- api_request_total: tracks total API requests.
- api_request_latency_seconds: measures response latency.
- api_health_status: reports application health (1 = healthy, 0 = degraded).

These metrics enable proactive monitoring and performance tuning. With Prometheus (and optional Grafana dashboards), developers can track agent activity, request behavior, and overall system health in real time.
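A condensed sketch of how these metrics could be declared and exposed with the Prometheus Python client is shown below; the exact instrumentation wiring in the project may differ:

```python
# Condensed, illustrative metric declarations with prometheus_client (wiring is assumed).
from fastapi import FastAPI, Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, Gauge, Histogram, generate_latest

API_REQUESTS = Counter("api_request_total", "Total API requests", ["endpoint"])
API_LATENCY = Histogram("api_request_latency_seconds", "API response latency", ["endpoint"])
API_HEALTH = Gauge("api_health_status", "Application health (1 = healthy, 0 = degraded)")

app = FastAPI()

@app.get("/api/metrics")
def metrics() -> Response:
    # Serve current metric values in the Prometheus text exposition format.
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)
```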
Agentic Forex AI provides further visibility through lightweight, developer-friendly observability features:
- Structured JSON trace logs in data/traces/ capture agent execution order, timestamps, fallback logic, and validation outcomes.
- The /api/health route exposes the real-time status of the system, allowing both users and monitoring tools to detect degraded or failing states early.

Together, these mechanisms ensure full transparency into pipeline behavior and support reliable, production-grade operation.
To run or extend Agentic Forex AI, users should have:
System Requirements
- Docker installed locally (the container exposes port 8000 for the API and port 8501 for the dashboard)
- Internet access for yFinance market data and RSS news feeds

Python Dependencies
Installed automatically via requirements.txt: FastAPI, Uvicorn, Streamlit, Pydantic, Requests, prometheus-client, Loguru, and yfinance.

Knowledge Prerequisites
Users should be familiar with Python, Docker-based workflows, and basic REST API usage.
To build and run the project locally:
```bash
docker build -t forex-ai .
docker run -p 8501:8501 -p 8000:8000 forex-ai
```
Access the application via:
- Dashboard: http://localhost:8501
- API: http://localhost:8000
This design ensures complete reproducibility across environments. Developers can reproduce the exact deployment with a single Docker command, ensuring consistent results and dependency isolation.
1. Connect your GitHub repository to Railway.
2. Select Dockerfile as the deployment method.
3. Set the environment variable `API_URL=https://<your-railway-app>.up.railway.app/api`.
4. Deploy your project.
5. Verify the logs; you should see messages like:

```
success: api entered RUNNING state
success: dashboard entered RUNNING state
```
The Railway integration allows one-click CI/CD deployment, turning the local agentic system into a continuously running cloud service.
| Area | Current State | Planned Enhancement |
|---|---|---|
| Sentiment Analysis | Rule-based (TextBlob) | Integrate transformer-based FinBERT sentiment agent |
| Data Sources | Limited to RSS + yFinance | Add real-time broker feeds (Oanda API) |
| Evaluation | Binary confidence score | Implement historical performance tracking |
| Scalability | Single Docker instance | Deploy using Kubernetes or AWS Fargate for scaling |
While the current version focuses on reliability and modularity, future enhancements aim to introduce ML-based sentiment models, streaming data ingestion, and scalable container orchestration, paving the way toward a fully autonomous trading assistant.
Agentic Forex AI embodies the principles of safety, stability, and explainability, delivering practical value in real-world decision-making contexts. It serves as a demonstration of how AI systems can achieve transparency, traceability, and operational robustness while serving an applied domain, such as financial analytics.
Agentic Forex AI is an actively maintained technical asset, and improvements will be released as the system evolves; planned support includes the enhancements outlined in the roadmap above.
Support Channels:
Users can report bugs, request enhancements, or seek support through the GitHub Issues tab in the project repository. All reported items are reviewed regularly as part of ongoing maintenance.
The Creative Commons Attribution-ShareAlike (CC BY-SA) license was selected to align with the project's open innovation goals. It allows anyone to use, adapt, and distribute this work, provided proper credit is given and derivative works are shared under identical terms. This ensures collective improvement while maintaining attribution integrity. The "ShareAlike" clause encourages transparent evolution of the system, reinforcing Ready Tensor's mission of open, reproducible AI development.
Monitoring endpoints: /api/metrics and /api/health.