Swing Trading Project with EUR/USD Using Oanda and scikit-learn
Step 1: Environment Setup
Install Python
Ensure Python 3.8+ is installed.
Create a Virtual Environment
Navigate to your project directory and run:
python -m venv venv
source venv/bin/activate   # Unix/macOS
venv\Scripts\activate      # Windows
deactivate                 # run this later to exit the environment
Install Essential Libraries
Create requirements.txt with the following content:
pandas
numpy
matplotlib
seaborn
scikit-learn
jupyterlab
oandapyV20
requests
Install with pip install -r requirements.txt.
Step 2: Project Structure
Organize your directory as follows:
swing_trading_project/
├── data/
├── notebooks/
├── src/
│ ├── __init__.py
│ ├── data_fetcher.py
│ ├── feature_engineering.py
│ ├── model.py
│ └── backtester.py
├── tests/
├── requirements.txt
└── README.md
Step 3: Fetch Historical Data
- Sign up for an Oanda practice account and get an API key.
- Use `oandapyV20` in `data_fetcher.py` to request historical EUR/USD data. Consider H4 or D granularity.
- Save the data to `data/` as CSV.
import os

import pandas as pd
from oandapyV20 import API
import oandapyV20.endpoints.instruments as instruments

# Configuration
ACCOUNT_ID = 'your_account_id_here'
ACCESS_TOKEN = 'your_access_token_here'
INSTRUMENTS = ['EUR_USD', 'USD_JPY', 'GBP_USD', 'AUD_USD', 'USD_CAD']  # Extendable to more pairs
GRANULARITY = 'H4'  # Can be parameterized as needed
DATA_DIR = 'data'


def fetch_and_save_data(account_id, access_token, instrument_list, granularity, data_dir):
    """Fetch historical forex candles for the given instruments and save each to CSV."""
    client = API(access_token=access_token)
    if not os.path.exists(data_dir):
        os.makedirs(data_dir)
    for instrument in instrument_list:
        params = {
            "granularity": granularity,
            "count": 5000  # Adjust based on needs
        }
        data_request = instruments.InstrumentsCandles(instrument=instrument, params=params)
        data = client.request(data_request)
        candles = data.get('candles', [])
        if candles:
            df = pd.DataFrame([{
                'Time': candle['time'],
                'Open': float(candle['mid']['o']),
                'High': float(candle['mid']['h']),
                'Low': float(candle['mid']['l']),
                'Close': float(candle['mid']['c']),
                'Volume': candle['volume']
            } for candle in candles])
            # Save to CSV
            output_filename = f"{instrument.lower()}_data.csv"
            df.to_csv(os.path.join(data_dir, output_filename), index=False)
            print(f"Data saved for {instrument} to {output_filename}")


def main():
    """Orchestrate data fetching and saving."""
    print("Fetching data for instruments...")
    fetch_and_save_data(ACCOUNT_ID, ACCESS_TOKEN, INSTRUMENTS, GRANULARITY, DATA_DIR)
    print("Data fetching and saving complete.")


if __name__ == '__main__':
    main()
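A usage note: the account ID and access token above are placeholders. One common pattern, sketched below, is to read them from environment variables instead of hardcoding them in the script; the variable names `OANDA_ACCOUNT_ID` and `OANDA_ACCESS_TOKEN` are just illustrative choices, not anything Oanda requires.

import os

# Illustrative variable names; set them in your shell before running the script.
ACCOUNT_ID = os.environ.get('OANDA_ACCOUNT_ID', 'your_account_id_here')
ACCESS_TOKEN = os.environ.get('OANDA_ACCESS_TOKEN', 'your_access_token_here')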
Step 4: Exploratory Data Analysis
- Create a new Jupyter notebook in `notebooks/`.
- Load the CSV with `pandas` and perform initial exploration. Plot closing prices and moving averages, as in the sketch below.
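A minimal notebook sketch for this step, assuming the EUR/USD file from Step 3 was saved as `data/eur_usd_data.csv` with the column names produced by the fetcher above:

import pandas as pd
import matplotlib.pyplot as plt

# Load the candles saved in Step 3, indexed by timestamp.
df = pd.read_csv('data/eur_usd_data.csv', parse_dates=['Time'], index_col='Time')

# Compute two simple moving averages and plot them against the close.
df['SMA_50'] = df['Close'].rolling(window=50).mean()
df['SMA_200'] = df['Close'].rolling(window=200).mean()
df[['Close', 'SMA_50', 'SMA_200']].plot(figsize=(12, 6), title='EUR/USD Close with 50/200 SMAs')
plt.show()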
Step 5: Basic Feature Engineering
- In the notebook, add technical indicators as features (e.g., SMA 50, SMA 200, RSI) using `pandas` (see the sketch below).
- Investigate the relationship between these features and price movements.
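A sketch of what `feature_engineering.py` (or a notebook cell) might contain. The RSI here uses a plain rolling-mean variant rather than Wilder's smoothing, so treat it as illustrative rather than canonical.

import pandas as pd


def add_indicators(df: pd.DataFrame, rsi_period: int = 14) -> pd.DataFrame:
    """Append SMA 50/200 and a simple RSI to a DataFrame with a 'Close' column."""
    out = df.copy()
    out['SMA_50'] = out['Close'].rolling(window=50).mean()
    out['SMA_200'] = out['Close'].rolling(window=200).mean()

    # RSI: ratio of average gains to average losses over the lookback window.
    delta = out['Close'].diff()
    avg_gain = delta.clip(lower=0).rolling(window=rsi_period).mean()
    avg_loss = (-delta.clip(upper=0)).rolling(window=rsi_period).mean()
    out['RSI'] = 100 - 100 / (1 + avg_gain / avg_loss)
    return out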
Step 6: Initial Model Training
- In `model.py`, fit a simple `scikit-learn` model (e.g., LinearRegression, LogisticRegression) to predict price movements.
- Split data into training and testing sets to evaluate the model's performance (a sketch follows this list).
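One possible shape for `model.py`, assuming the indicator columns from Step 5 and framing the task as classifying whether the next candle closes higher; the column, function, and feature names are illustrative.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def train_direction_model(df: pd.DataFrame) -> LogisticRegression:
    """Fit a logistic regression predicting whether the next candle closes higher."""
    data = df.copy()
    data['Target'] = (data['Close'].shift(-1) > data['Close']).astype(int)
    data = data.dropna()

    features = ['SMA_50', 'SMA_200', 'RSI']
    X, y = data[features], data['Target']

    # shuffle=False preserves chronological order so the test set is strictly later data.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
    return model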
Step 7: Documentation
- Document your project's setup, objectives, and findings in `README.md`.
Next Steps
- Refine features, try different models, and develop a backtesting framework as you progress; a minimal backtesting sketch follows.
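As a starting point for `backtester.py`, here is a small vectorized sketch that turns 0/1 model signals into a long/flat equity curve. It ignores spread, slippage, and position sizing, so it only gives a rough first read on a strategy.

import pandas as pd


def backtest_signals(df: pd.DataFrame, signals: pd.Series) -> pd.DataFrame:
    """Compare a long/flat strategy driven by 0/1 signals with buy-and-hold."""
    returns = df['Close'].pct_change().fillna(0)
    # Act on each signal at the next bar to avoid trading on information not yet available.
    strategy_returns = returns * signals.reindex(df.index).shift(1).fillna(0)
    return pd.DataFrame({
        'strategy': (1 + strategy_returns).cumprod(),
        'buy_and_hold': (1 + returns).cumprod(),
    })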