


AWS Networking: The Production Survival Guide

Battle-tested strategies for troubleshooting and maintaining resilient networks


I. Flow Log Mastery: The GUI-CLI Hybrid Approach

1. Enabling Flow Logs (GUI Method)

Steps:

  1. Navigate to VPC Dashboard → Select target VPC → Actions → Create Flow Log
  2. Configure:
    • Filter: ALL (full visibility), REJECT (security focus), or ACCEPT (performance)
    • Destination:
      • CloudWatch Logs for real-time analysis
      • S3 for compliance/archiving
    • Advanced: Add custom fields like ${tcp-flags} for packet analysis

Pro Tip:
Enable flow logs in all environments - they're cheap insurance, and they only capture traffic from the moment they're enabled.

2. CloudWatch Logs Insights Deep Dive

Key Queries:

/* Basic Traffic Analysis */
fields @timestamp, srcAddr, dstAddr, action, bytes
| filter dstPort = 443
| stats sum(bytes) as totalTraffic by srcAddr
| sort totalTraffic desc

/* Security Investigation */
fields @timestamp, srcAddr, dstAddr, dstPort
| filter action = "REJECT" and dstPort = 22
| limit 50

/* NAT Gateway Health Check */
fields @timestamp, srcAddr, dstAddr
| filter srcAddr like "10.0.1." and isIpv4InSubnet(dstAddr, "8.8.8.0/24")
| stats count() by bin(5m)

Visualization Tricks:

  1. Use time series graphs to spot traffic patterns
  2. Create bar charts of top talkers
  3. Save frequent queries as dashboard widgets
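
A minimal sketch of tip 3, assuming the flow logs land in a log group named /vpc/flowlogs and that you have cloudwatch:PutDashboard permissions; the dashboard name, region, and query are illustrative only:

    import json
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # A Logs Insights query embedded in a dashboard "log" widget.
    # The SOURCE clause names the log group to query (assumption: /vpc/flowlogs).
    top_talkers_query = (
        "SOURCE '/vpc/flowlogs' | fields srcAddr, bytes "
        "| stats sum(bytes) as totalTraffic by srcAddr "
        "| sort totalTraffic desc | limit 10"
    )

    dashboard_body = {
        "widgets": [{
            "type": "log",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "region": "us-east-1",
                "title": "Top talkers (VPC flow logs)",
                "query": top_talkers_query,
                "view": "table",
            },
        }]
    }

    cloudwatch.put_dashboard(
        DashboardName="vpc-flow-logs",
        DashboardBody=json.dumps(dashboard_body),
    )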

II. High-Risk Operations Playbook

Danger Zone: Actions That Break Connections

| Operation | Risk | Safe Approach |
|---|---|---|
| SG modifications | Drops active connections | Add new rules first, then remove old |
| NACL updates | Stateless - kills existing flows | Test in staging first |
| Route changes | Misroutes critical traffic | Use weighted routing for failover |
| NAT replacement | Breaks long-lived sessions | Warm standby + EIP preservation |

Real-World Example:
A financial firm caused a 37-minute outage by modifying NACLs during trading hours. The fix? Now they:

  1. Test all changes in a replica environment
  2. Implement change windows
  3. Use Terraform plan/apply for dry runs

Safe Troubleshooting Techniques

  1. Passive Monitoring

    • Flow logs (meta-analysis)
    • Traffic mirroring (packet-level)
    • CloudWatch Metrics (trend spotting)
  2. Non-Destructive Testing

    # Packet capture without service impact
    sudo tcpdump -i eth0 -w debug.pcap -C 100 -W 5 host 10.0.1.5 and port 3306
    
  3. Change Management

    • Canary deployments (1% traffic first)
    • Automated rollback hooks
    • SSM Session Manager for emergency access

III. War Stories: Lessons From the Trenches

1. The Case of the Vanishing Packets

Symptoms: Intermittent database timeouts
Root Cause: Overlapping security group rules being silently deduped
Fix:

# Find duplicate SG rules
aws ec2 describe-security-groups \
  --query 'SecurityGroups[*].IpPermissions' \
  | jq '.[] | group_by(.FromPort, .ToPort, .IpRanges)[] | select(length > 1)'

2. The $15,000 NAT Surprise

Symptoms: Unexpected bill spike
Discovery:

# List NAT Gateways still in the "available" state, then cross-check which ones no route table references
aws ec2 describe-nat-gateways \
  --filter "Name=state,Values=available" \
  --query 'NatGateways[*].[NatGatewayId,SubnetId,VpcId]' --output table

Prevention: Tag all resources with Owner and Purpose
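
As a follow-up to that discovery step, a hedged boto3 sketch that flags "available" NAT Gateways which no route table actually points at (usually the forgotten ones); verify in the console before deleting anything:

    import boto3

    ec2 = boto3.client("ec2")

    # NAT Gateways that are still provisioned (and billing)
    available = {
        ngw["NatGatewayId"]
        for ngw in ec2.describe_nat_gateways()["NatGateways"]
        if ngw["State"] == "available"
    }

    # NAT Gateway IDs that some route table actually routes through
    routed = {
        route["NatGatewayId"]
        for rt in ec2.describe_route_tables()["RouteTables"]
        for route in rt["Routes"]
        if "NatGatewayId" in route
    }

    for ngw_id in sorted(available - routed):
        print(f"No route table references {ngw_id} - candidate for removal")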

3. The Peering Paradox

Issue: Cross-account VPC peering with broken DNS
Solution:

# Step 1 (from the hosted-zone owner account): authorize the association
aws route53 create-vpc-association-authorization \
  --hosted-zone-id Z123 --vpc VPCRegion=us-east-1,VPCId=vpc-456
# Step 2 (from the VPC owner account): complete the association
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z123 --vpc VPCRegion=us-east-1,VPCId=vpc-456

IV. The Resiliency Toolkit

Must-Have Automation

  1. Auto-Rollback Systems

    # Lambda function, invoked by an EventBridge rule on CloudTrail events
    # (see the sketch after this list), that watches for dangerous changes
    def lambda_handler(event, context):
        if event['detail']['eventName'] == 'DeleteNetworkAcl':
            # revert_nacl is a placeholder for your own rollback logic
            revert_nacl(event['detail']['requestParameters']['networkAclId'])
    
  2. Chaos Engineering Tests

    • Scheduled NAT failure drills
    • AZ isolation simulations
    • Route table corruption tests
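
A hedged sketch of wiring up item 1: an EventBridge rule that matches the CloudTrail DeleteNetworkAcl event and invokes the rollback Lambda shown above (rule name and function ARN are placeholders):

    import json
    import boto3

    events = boto3.client("events")

    # Match the CloudTrail event emitted when someone deletes a network ACL
    pattern = {
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["DeleteNetworkAcl"],
        },
    }

    events.put_rule(
        Name="nacl-delete-guard",
        EventPattern=json.dumps(pattern),
        State="ENABLED",
    )

    # Placeholder ARN - point the rule at the rollback Lambda
    events.put_targets(
        Rule="nacl-delete-guard",
        Targets=[{
            "Id": "rollback-lambda",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:revert-nacl",
        }],
    )

    # The Lambda also needs a resource policy (lambda add-permission) allowing
    # events.amazonaws.com to invoke it.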

The 5-Minute Recovery Checklist

  1. Diagnose
    aws ec2 describe-network-interfaces --filters "Name=status,Values=available"
    
  2. Contain
    • Freeze CI/CD pipelines
    • Disable problematic security groups
  3. Restore
    • Terraform rollback
    • Route table replacement

V. Pro Tips Archive

Security Group Wisdom

# Terraform best practice
resource "aws_security_group" "example" {
  egress {
    # Never leave empty - defaults to deny all!
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] # Restrict in prod
  }
}

NACL Gotchas

  • Ephemeral ports (32768-60999) must be allowed bidirectionally
  • Rule evaluation order matters (lowest number first)
  • Default NACL allows all traffic (custom NACLs deny)

Direct Connect Pro Tips

  • Set BGP timers to 10s keepalive / 30s hold time (set on your router; BGP negotiates down to the lower hold time)
  • Keep the MTU at 1500 unless jumbo frames are explicitly enabled and verified end-to-end
  • Monitor with:
    aws directconnect describe-virtual-interfaces --query 'virtualInterfaces[*].[virtualInterfaceId,bgpPeers[0].bgpStatus]'
    

Final Checklist for Production Safety

  1. Enable flow logs in all VPCs
  2. Document rollback procedures
  3. Test failure scenarios regularly
  4. Implement change controls
  5. Tag all network resources

Remember: The best troubleshooting is avoiding problems through design. Use this guide as your playbook for building and maintaining bulletproof AWS networks.



The AWS Console (GUI) is often the fastest and most intuitive way to analyze Flow Logs, especially for SMEs who need quick answers. The rest of this section walks through the practical GUI workflow that AWS network engineers actually use.


Step-by-Step: Troubleshooting with Flow Logs in the AWS Console

1. Enable Flow Logs (GUI Method)

  1. Go to VPC Dashboard → Your VPCs → Select VPC → Actions → Create Flow Log.

  2. Choose:

    • Filter: ALL (accepts + rejects), REJECT (only blocks), or ACCEPT (only allows).
    • Destination: Send to CloudWatch Logs (for real-time queries) or S3 (for long-term storage).
    • Log Format: Default works, but advanced users add custom fields (e.g., ${tcp-flags}).

    (Screenshot: the Create Flow Log dialog)
    No CLI needed—just 3 clicks.


2. Analyze Flow Logs in CloudWatch Logs Insights

Where GUI Beats CLI:

  • No query syntax memorization → Pre-built queries.
  • Visual filtering → Click-to-analyze.

Steps:

  1. Go to CloudWatch → Logs Insights.
  2. Select your Flow Logs group (e.g., VPCFlowLogs).

Key Pre-Built Queries (Click + Run)
A. "Why is my traffic blocked?"
fields @timestamp, srcAddr, dstAddr, dstPort, action
| filter action = "REJECT"
| sort @timestamp desc
| limit 50

GUI Advantage: Hover over REJECT entries to see blocked ports/IPs instantly.

B. "Whos talking to this suspicious IP?"
fields @timestamp, srcAddr, dstAddr, bytes
| filter dstAddr = "54.239.25.200"  # Example: AWS external IP
| stats sum(bytes) as totalBytes by srcAddr
| sort totalBytes desc

GUI Advantage: Click on srcAddr to drill into specific instances.

C. "Is my NAT Gateway working?"
fields @timestamp, srcAddr, dstAddr, action
| filter srcAddr like "10.0.1." and dstAddr like "8.8.8."
| stats count(*) by bin(5m)  # Traffic volume over time

GUI Advantage: Switch to Visualization tab to see graphs.


3. Visualize Traffic Patterns (No CLI)

  1. In CloudWatch Logs Insights, run a query.

  2. Click Visualization → Choose:

    • Bar chart: Top talkers (e.g., stats count(*) by srcAddr).
    • Time series: Traffic spikes (e.g., stats sum(bytes) by bin(1h)).

    (Screenshot: the Logs Insights Visualization tab)


When to Use GUI vs. CLI for Flow Logs

| Scenario | GUI (Console) | CLI |
|---|---|---|
| One-off troubleshooting | Faster (pre-built queries, point + click) | Overkill |
| Daily audits | Logs Insights + dashboards | Manual queries are slow |
| Automation (e.g., SOC) | Not scalable | Script with aws logs start-query (see the sketch below) |
| Deep packet analysis | Limited to metadata | Pipe logs to Athena/S3 for SQL queries |
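
For the automation row, a minimal boto3 sketch of the start-query flow (the log group name and one-hour window are assumptions):

    import time
    import boto3

    logs = boto3.client("logs")

    query = """
    fields @timestamp, srcAddr, dstAddr, dstPort, action
    | filter action = "REJECT"
    | sort @timestamp desc
    | limit 50
    """

    # Assumption: flow logs are delivered to /vpc/flowlogs
    start = logs.start_query(
        logGroupName="/vpc/flowlogs",
        startTime=int(time.time()) - 3600,  # last hour
        endTime=int(time.time()),
        queryString=query,
    )

    # Poll until the query finishes, then print each result row
    while True:
        result = logs.get_query_results(queryId=start["queryId"])
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            break
        time.sleep(1)

    for row in result.get("results", []):
        print({field["field"]: field["value"] for field in row})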

Pro Tips for GUI-Based SMEs

  1. Save Queries: Click Save → Add to dashboard for recurring checks.
  2. Alarms: Create CloudWatch alarms for anomalies (e.g., a spike in REJECTs) - see the sketch after this list.
    • Example: alarm if >100 REJECTs in 5 minutes.
  3. Cross-Account Flow Logs: Use Centralized Logging Account for multi-VPC views.
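
A hedged sketch of the alarm from tip 2, assuming the default space-delimited flow-log format, a log group named /vpc/flowlogs, and an existing SNS topic for notifications (all three are placeholders):

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    # Turn REJECT log lines into a custom metric (default flow-log field order assumed)
    logs.put_metric_filter(
        logGroupName="/vpc/flowlogs",
        filterName="rejected-packets",
        filterPattern='[version, account, eni, src, dst, srcport, dstport, proto, packets, bytes, start, end, action=REJECT, status]',
        metricTransformations=[{
            "metricName": "RejectedPackets",
            "metricNamespace": "VPCFlowLogs",
            "metricValue": "1",
        }],
    )

    # Alarm if more than 100 REJECTs land in a 5-minute window
    cloudwatch.put_metric_alarm(
        AlarmName="vpc-reject-spike",
        Namespace="VPCFlowLogs",
        MetricName="RejectedPackets",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=100,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:network-alerts"],  # placeholder topic
    )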

Real-World Example: Troubleshooting a Broken NAT Gateway

Symptoms: Instances in the private subnet can't reach the internet.

GUI Flow:

  1. Flow Logs Query:
    fields @timestamp, srcAddr, dstAddr, action
    | filter srcAddr like "10.0.1." and dstAddr like "8.8.8."
    | sort @timestamp desc
    
  2. Findings:
    • If action = "REJECT" → Check NACLs.
    • If no logs → Check route tables (no path to NAT Gateway).
  3. Fix:
    • GUI route table edit: Add 0.0.0.0/0 → nat-gateway-id.
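
If you'd rather script that last step, a hedged boto3 equivalent of the route-table edit (all IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Add the default route through the NAT Gateway
    # (use replace_route() instead if a stale 0.0.0.0/0 entry already exists)
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId="nat-0123456789abcdef0",
    )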

Why GUI Wins for SMEs

  • No syntax errors: Pre-built queries reduce mistakes.
  • Speed: Click-to-insight in seconds (vs. CLI trial/error).
  • Collaboration: Share CloudWatch dashboards with team.

Exception: Use CLI only if:

  • You're scripting (e.g., nightly security scans).
  • AWS Console is down (rare, but happens).

Final Answer

Yes, the GUI (AWS Console + CloudWatch Logs Insights) is the best way to analyze Flow Logs for most troubleshooting. CLI is for automation—not day-to-day SME work.

Actionable Checklist:

  1. Enable Flow Logs in GUI (VPC → Actions → Create Flow Log).
  2. Use CloudWatch Logs Insights for point+click queries.
  3. Save key queries to dashboards (e.g., REJECTed traffic).
  4. Set alerts for traffic anomalies.


1. High-Risk Actions That Can Break Traffic

A. Security Group Rule Modifications

  • Risk: Removing/updating rules can drop active connections.
  • Example:
    • Revoking an inbound HTTPS (443) rule kills live sessions.
    • Changing egress rules can disrupt outbound API calls.
  • Mitigation:
    • Stage changes: Add new rules before removing old ones (see the sketch after this list).
    • Use temporary rules: Set short-lived rules (e.g., aws ec2 authorize-security-group-ingress --cidr 1.2.3.4/32 --port 443 --protocol tcp --group-id sg-123).
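
A minimal boto3 sketch of the "add before remove" staging pattern referenced above (group ID, ports, and CIDRs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")
    sg_id = "sg-0123456789abcdef0"  # placeholder

    # 1. Add the replacement rule first, so traffic keeps flowing
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.20.0.0/16", "Description": "new app subnet"}],
        }],
    )

    # 2. Verify connectivity from the new range, THEN remove the old rule
    ec2.revoke_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.10.0.0/16"}],
        }],
    )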

B. Network ACL (NACL) Updates

  • Risk: NACLs are stateless—updates drop existing connections.
  • Example:
    • Adding a deny rule for 10.0.1.0/24 kills active TCP sessions.
  • Mitigation:
    • Test in non-prod first.
    • Modify NACLs during low-traffic windows.

C. Route Table Changes

  • Risk: Misrouting traffic (e.g., removing a NAT Gateway route).
  • Example:
    • Deleting 0.0.0.0/0 → igw-123 makes public subnets unreachable.
  • Mitigation:
    • Pre-validate routes:
      aws ec2 describe-route-tables --route-table-id rtb-123 --query 'RouteTables[*].Routes'
      
    • Use weighted routing (e.g., Transit Gateway) for failover.

D. NAT Gateway Replacement

  • Risk: Swapping NAT Gateways breaks long-lived connections (e.g., SFTP, WebSockets).
  • Mitigation:
    • Preserve Elastic IPs (attach to new NAT Gateway first).
    • Warm standby: Deploy new NAT Gateway before decommissioning old one.
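
A hedged sketch of that warm-standby swap: bring up the replacement, repoint the private route table, and only then delete the old gateway (all IDs are placeholders; note that a single Elastic IP cannot be attached to two NAT Gateways at once, so a true IP-preserving swap needs a brief cutover):

    import boto3

    ec2 = boto3.client("ec2")

    # 1. Bring up the replacement NAT Gateway (assumption: a spare EIP allocation)
    new_ngw = ec2.create_nat_gateway(
        SubnetId="subnet-0123456789abcdef0",
        AllocationId="eipalloc-0123456789abcdef0",
    )["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[new_ngw])

    # 2. Repoint the private subnet's default route at the new gateway
    ec2.replace_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=new_ngw,
    )

    # 3. Decommission the old gateway only after traffic is confirmed healthy
    ec2.delete_nat_gateway(NatGatewayId="nat-0abcdef0123456789")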

2. Safe Troubleshooting Techniques

A. Passive Monitoring (Zero Impact)

  • Flow Logs: Query logs without touching infrastructure.
    # CloudWatch Logs Insights (GUI)  
    fields @timestamp, srcAddr, dstAddr, action  
    | filter dstAddr = "10.0.2.5" and action = "REJECT"  
    
  • VPC Traffic Mirroring: Copy traffic to a monitoring instance (no production impact).

B. Non-Destructive Testing

  • Packet Captures on Test Instances:
    sudo tcpdump -i eth0 -w /tmp/capture.pcap host 10.0.1.10  # No service restart needed  
    
  • Canary Deployments: Test changes on 1% of traffic (e.g., weighted ALB routes).
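
A hedged boto3 sketch of the weighted-route canary mentioned above, assuming an ALB listener and two target groups already exist (ARNs are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Shift 1% of traffic to the canary target group; 99% stays on stable
    elbv2.modify_listener(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc123/def456",
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/stable/1111111111111111", "Weight": 99},
                    {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/canary/2222222222222222", "Weight": 1},
                ],
            },
        }],
    )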

C. Connection-Preserving Changes

  • Security Groups:
    • Add the replacement rules before deleting the old ones (note: SG rules have no priority numbers; all rules are evaluated).
  • NACLs:
    • Temporarily set Ephemeral Ports (32768-60999) to ALLOW during changes.

3. Redundancy Patterns to Reduce Risk

| Scenario | Failover Strategy |
|---|---|
| NAT Gateway failure | Deploy a NAT Gateway per AZ + test failover |
| Route table corruption | Use version-controlled Terraform rollback |
| SG lockout | Pre-configure backup admin access (e.g., SSM) |

4. Worst-Case Recovery Plan

  1. Rollback Immediately:
    • Revert NACLs/SGs to last-known-good state.
    aws ec2 replace-network-acl-association --association-id aclassoc-123 --network-acl-id acl-456  
    
  2. Bypass Troubleshooting:
    • Use AWS Systems Manager (SSM) to debug instances without SSH (no SG changes needed).
  3. Post-Mortem:
    • Check CloudTrail for who made changes:
      aws cloudtrail lookup-events --lookup-attributes AttributeKey=ResourceName,AttributeValue=sg-123  
      

Key Takeaways

  • Avoid Live Changes: Prefer read-only tools (Flow Logs, CloudWatch).
  • Stage Changes: Test in non-prod, then deploy with canaries.
  • Preserve State: Never drop NACL/SG rules without redundancy.
  • Automate Rollbacks: Use Terraform/CloudFormation for quick recovery.

Pro Tip: For critical systems, run chaos engineering tests (e.g., simulate NAT failure) during off-hours to validate resilience.



Real-world war stories and battle-tested tricks are what separate theoretical knowledge from production-hardened expertise. Below is a concise technical guide distilled from the AWS networking trenches, covering lessons learned, hidden pitfalls, and pro tips rarely found in the docs.


AWS Networking War Stories: The Unwritten Guide

"Good judgment comes from experience. Experience comes from bad judgment."


1. Security Groups (SGs): The Silent Killers

War Story: The Case of the Phantom Timeouts

  • Symptoms: Intermittent HTTP timeouts between microservices.
  • Root Cause: Overlapping SG rules with different description fields but identical IP permissions. AWS silently dedupes them, causing random drops.
  • Fix:
    # Audit duplicate rules (CLI reveals what GUI hides)
    aws ec2 describe-security-groups --query 'SecurityGroups[*].IpPermissions' | jq '.[] | group_by(.FromPort, .ToPort, .IpProtocol, .IpRanges)[] | select(length > 1)'
    
  • Lesson: Never trust the GUI alone—use CLI to audit SGs.

Pro Tip: The "Deny All" Egress Trap

  • Mistake: Setting egress = [] in Terraform (defaults to deny all).
  • Outcome: Instances lose SSM, patch management, and API connectivity.
  • Fix: Always explicitly allow:
    egress {
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]  # Or restrict to necessary IPs
    }
    

2. NACLs: The Stateless Nightmare

War Story: The 5-Minute Outage

  • Symptoms: Database replication breaks after NACL "minor update."
  • Root Cause: NACL rule #100 allowed TCP/3306, but rule #200 denied Ephemeral Ports (32768-60999)—breaking replies.
  • Fix:
    # Allow ephemeral ports INBOUND for responses
    aws ec2 create-network-acl-entry --network-acl-id acl-123 --rule-number 150 --protocol tcp --port-range From=32768,To=60999 --cidr-block 10.0.1.0/24 --rule-action allow --ingress
    
  • Lesson: NACLs need mirror rules for ingress/egress. Test with telnet before deploying.

Pro Tip: The Rule-Order Bomb

  • Mistake: Adding a deny rule at #50 after allowing at #100.
  • Outcome: Traffic silently drops (first match wins).
  • Fix: Use describe-network-acls to audit rule ordering:
    aws ec2 describe-network-acls --query 'NetworkAcls[*].Entries[?RuleNumber==`50`]'
    

3. NAT Gateways: The $0.045/hr Landmine

War Story: The 4 AM Bill Shock

  • Symptoms: $3k/month bill from "idle" NAT Gateways.
  • Root Cause: Leftover NAT Gateways in unused AZs (auto-created by Terraform).
  • Fix:
    # List "available" NAT Gateways, then check which ones no route table actually references
    aws ec2 describe-nat-gateways --filter "Name=state,Values=available" --query 'NatGateways[*].[NatGatewayId,SubnetId]' --output table
    
  • Lesson: Always tag NAT Gateways with Owner and Expiry.

Pro Tip: The TCP Connection Black Hole

  • Mistake: Replacing a NAT Gateway without draining connections.
  • Outcome: Active sessions (SSH, RDP, DB) hang until TCP timeout (30+ mins).
  • Fix:
    • Before replacement: Reduce TCP timeouts on clients.
    • Use Network Load Balancer (NLB) for stateful failover.

4. VPC Peering: The Cross-Account Trap

War Story: The DNS That Wasn't

  • Symptoms: EC2 instances can't resolve the peered VPC's private hosted zones.
  • Root Cause: Peering doesn't auto-share Route53 Private Hosted Zones.
  • Fix:
    # Step 1 (hosted-zone owner account): authorize the association
    aws route53 create-vpc-association-authorization --hosted-zone-id Z123 --vpc VPCRegion=us-east-1,VPCId=vpc-456
    # Step 2 (VPC owner account): aws route53 associate-vpc-with-hosted-zone --hosted-zone-id Z123 --vpc VPCRegion=us-east-1,VPCId=vpc-456
    
  • Lesson: Test DNS resolution early in peering setups.

Pro Tip: The Overlapping CIDR Silent Fail

  • Mistake: Peering 10.0.0.0/16 with another 10.0.0.0/16.
  • Outcome: Routes appear, but traffic fails.
  • Fix: Always design non-overlapping CIDRs (e.g., 10.0.0.0/16 + 10.1.0.0/16).

5. Direct Connect: The BGP Rollercoaster

War Story: The 1-Packet-Per-Second Mystery

  • Symptoms: Applications crawl over Direct Connect.
  • Root Cause: BGP keepalive set to 60s (default), causing route flapping.
  • Fix: BGP timers aren't configurable through the Direct Connect API - they're negotiated between the peers (the lower hold time wins), so set the keepalive/hold timers on the customer router's BGP session toward AWS.
  • Lesson: Don't accept router defaults - target a 10s keepalive / 30s hold time.

Pro Tip: The MTU Mismatch

  • Mistake: Assuming jumbo frames (9001 MTU) work end-to-end (they're only supported on private/transit virtual interfaces and must be explicitly enabled).
  • Outcome: Packet fragmentation kills throughput.
  • Fix: Hard-set MTU to 1500 on on-prem routers:
    # Linux example
    ip link set dev eth0 mtu 1500
    

6. The Ultimate Troubleshooting Checklist

Before Making Changes:

  1. Backup Configs:
    aws ec2 describe-security-groups --query 'SecurityGroups[*].{GroupId:GroupId,IpPermissions:IpPermissions}' > sg-backup.json
    
  2. Enable Flow Logs:
    # CloudWatch Logs delivery also needs --log-group-name and --deliver-logs-permission-arn (IAM role)
    aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-123 --traffic-type ALL --log-destination-type cloud-watch-logs
    
  3. Test with Canary: Deploy changes to one AZ/subnet first.

When Things Break:

  1. Rollback Fast: Re-apply the last known-good Terraform state (terraform apply, or terraform apply -replace=<address> to force-recreate a resource), or fall back to the CLI.
  2. SSM Session Manager: Access instances without SSH (bypass broken SGs).
  3. CloudTrail Forensics:
    aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=DeleteSecurityGroup
    

Final Wisdom

  • Document Your "Murder Mystery" Stories: Every outage teaches something.
  • Automate Recovery: Use Lambda + EventBridge to auto-rollback NACL changes.
  • Pressure-Test Resiliency: Run GameDays (e.g., randomly kill NAT Gateways).
