
Swing Trading Project with EUR/USD Using Oanda and scikit-learn

Step 1: Environment Setup

Install Python

Ensure Python 3.8+ is installed.

Create a Virtual Environment

Navigate to your project directory and run:

python -m venv venv
source venv/bin/activate  # Unix/macOS
venv\Scripts\activate     # Windows

Install Essential Libraries

Create requirements.txt with the following content:

pandas
numpy
matplotlib
seaborn
scikit-learn
jupyterlab
oandapyV20
requests

Install with pip install -r requirements.txt.

Step 2: Project Structure

Organize your directory as follows:

swing_trading_project/
├── data/
├── notebooks/
├── src/
│   ├── __init__.py
│   ├── data_fetcher.py
│   ├── feature_engineering.py
│   ├── model.py
│   └── backtester.py
├── tests/
├── requirements.txt
└── README.md

Step 3: Fetch Historical Data

  • Sign up for an Oanda practice account and generate a personal access token (API key).
  • Use oandapyV20 in data_fetcher.py to request historical EUR/USD data. H4 (4-hour) or D (daily) granularity suits swing trading.
  • Save the data to data/ as CSV. A sketch of data_fetcher.py:

import csv
import os
from oandapyV20 import API    # The Oanda API wrapper
import oandapyV20.endpoints.instruments as instruments

# Configuration: the candles endpoint needs only the access token
ACCESS_TOKEN = 'your_access_token_here'
INSTRUMENT = 'EUR_USD'
GRANULARITY = 'H4'  # 4-hour candles
OUTPUT_FILENAME = 'eur_usd_data.csv'

# Directory for saving the data
DATA_DIR = 'data'
os.makedirs(DATA_DIR, exist_ok=True)

def fetch_data(access_token, instrument, granularity):
    """Fetch historical forex data for a specified instrument and granularity."""
    client = API(access_token=access_token)
    params = {
        "granularity": granularity,
        "count": 5000  # Maximum data points to fetch in one request
    }
    
    # Create a data request
    data_request = instruments.InstrumentsCandles(instrument=instrument, params=params)
    data = client.request(data_request)
    
    return data['candles']

def save_to_csv(data, filename):
    """Save fetched forex data to a CSV file."""
    filepath = os.path.join(DATA_DIR, filename)
    with open(filepath, mode='w', newline='') as file:
        writer = csv.writer(file)
        writer.writerow(['Time', 'Open', 'High', 'Low', 'Close', 'Volume'])
        
        for candle in data:
            if not candle['complete']:
                continue  # skip the still-forming, incomplete candle
            writer.writerow([
                candle['time'],
                candle['mid']['o'],
                candle['mid']['h'],
                candle['mid']['l'],
                candle['mid']['c'],
                candle['volume']
            ])

def main():
    """Main function to fetch and save EUR/USD data."""
    print("Fetching data...")
    data = fetch_data(ACCESS_TOKEN, INSTRUMENT, GRANULARITY)
    print(f"Fetched {len(data)} data points.")
    
    print("Saving to CSV...")
    save_to_csv(data, OUTPUT_FILENAME)
    print(f"Data saved to {os.path.join(DATA_DIR, OUTPUT_FILENAME)}")

if __name__ == '__main__':
    main()

Step 4: Exploratory Data Analysis

  • Create a new Jupyter notebook in notebooks/.
  • Load the CSV with pandas and perform initial exploration: plot closing prices and moving averages (see the sketch below).
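
A minimal exploration cell, assuming the CSV and column names produced by data_fetcher.py in Step 3:

import pandas as pd
import matplotlib.pyplot as plt

# Load the saved candles; parse Oanda's ISO-8601 timestamps as the index.
df = pd.read_csv('data/eur_usd_data.csv', parse_dates=['Time'], index_col='Time')
print(df.describe())

# Plot the close alongside 50- and 200-period simple moving averages.
ax = df['Close'].plot(figsize=(12, 6), label='Close')
df['Close'].rolling(50).mean().plot(ax=ax, label='SMA 50')
df['Close'].rolling(200).mean().plot(ax=ax, label='SMA 200')
ax.set_ylabel('EUR/USD')
ax.legend()
plt.show()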

Step 5: Basic Feature Engineering

  • In the notebook, add technical indicators as features (e.g., SMA 50, SMA 200, RSI) using pandas, then move the logic into feature_engineering.py (a sketch follows this list).
  • Investigate the relationship between these features and subsequent price movements.
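
A sketch for feature_engineering.py; the RSI here is the standard 14-period Wilder formulation, approximated with pandas' exponential moving average:

import pandas as pd

def add_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Add SMA and RSI columns to a DataFrame with a 'Close' column."""
    out = df.copy()
    out['SMA_50'] = out['Close'].rolling(50).mean()
    out['SMA_200'] = out['Close'].rolling(200).mean()

    # 14-period RSI with Wilder's smoothing (an EMA with alpha = 1/14).
    delta = out['Close'].diff()
    gain = delta.clip(lower=0).ewm(alpha=1/14, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1/14, adjust=False).mean()
    rs = gain / loss
    out['RSI_14'] = 100 - 100 / (1 + rs)
    return out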

Step 6: Initial Model Training

  • In model.py, fit a simple scikit-learn model (e.g., LinearRegression for returns, LogisticRegression for direction) to predict price movements.
  • Split the data chronologically into training and testing sets (no shuffling, to avoid look-ahead bias) and evaluate the model's performance (a sketch follows this list).
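
A minimal sketch for model.py, assuming the add_indicators helper from the Step 5 sketch; it frames the task as classifying whether the next candle closes higher:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

from feature_engineering import add_indicators  # the Step 5 sketch

df = pd.read_csv('data/eur_usd_data.csv', parse_dates=['Time'], index_col='Time')
df = add_indicators(df)

# Target: 1 if the next candle's close is higher than the current close.
df['Target'] = (df['Close'].shift(-1) > df['Close']).astype(int)
df = df.dropna().iloc[:-1]  # drop indicator warm-up rows and the last row (no next close)

features = ['SMA_50', 'SMA_200', 'RSI_14']
X, y = df[features], df['Target']

# Chronological split: train on the first 80%, test on the rest.
split = int(len(df) * 0.8)
model = LogisticRegression(max_iter=1000)
model.fit(X.iloc[:split], y.iloc[:split])

accuracy = accuracy_score(y.iloc[split:], model.predict(X.iloc[split:]))
print(f"Test accuracy: {accuracy:.3f}")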

Step 7: Documentation

  • Document your project's setup, objectives, and findings in README.md.

Next Steps

  • Refine features, try different models, and develop a backtesting framework in backtester.py as you progress; a minimal starting point is sketched below.
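
A vectorized backtest sketch for backtester.py, assuming a DataFrame with a 'Close' column and a hypothetical 'Signal' column of model predictions (1 = long, 0 = flat); it ignores spreads and slippage:

import pandas as pd

def backtest(df: pd.DataFrame) -> pd.Series:
    """Return the cumulative return of trading the 'Signal' column."""
    returns = df['Close'].pct_change()
    # Trade a prediction made at candle t on candle t+1 to avoid look-ahead bias.
    strategy_returns = returns * df['Signal'].shift(1)
    return (1 + strategy_returns.fillna(0)).cumprod()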