Add tech_docs/AWS_Lambda_for_Amazon_Connect_Survey_Processing.md

2025-08-04 20:45:37 -05:00
parent 784f840e50
commit 07bd2f414c


### **Code Review: AWS Lambda for Amazon Connect Survey Processing**
This document is a peer code review of a Python script implementing an AWS Lambda function.
---
#### **File:** `lambda_function.py`
**Purpose:** This Lambda function processes Amazon Connect Contact Trace Records (CTRs) from a Kinesis stream, extracts customer satisfaction survey data, enriches it with agent details, and stores the results in a DynamoDB table. It also includes an email notification system for errors and critical events.
---
### **Review Summary**
Overall, the code is functional and achieves its stated goal. The logic is understandable, and it incorporates good practices like using environment variables for configuration. However, there are significant opportunities to improve its robustness, maintainability, and adherence to industry best practices. The primary areas for improvement are code structure, error handling, and dependency management.
---
### **Detailed Review**
#### **1. Architecture & Design**
* **Critique:** The code is a monolithic script with a few functions. The `main_logic` function is particularly long and handles multiple responsibilities: data parsing, validation, API calls, and database writes. This makes the code hard to read, debug, and test.
* **Recommendation:**
* **Modularity:** Break down `main_logic` into smaller, single-responsibility functions. For instance, `process_record(record)`, `validate_survey_data(data)`, `enrich_agent_data(agent_id)`, and `store_data(item)`. This improves readability and allows for easier unit testing.
* **Dependency Injection:** Instead of relying on global variables for AWS clients (`ses_client`, `acClient`, `dbclient`), pass them as arguments to the functions that need them. This makes the code more testable by allowing mocks to be injected during testing.
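A minimal sketch of the injection pattern follows; the function and stub names are illustrative, not from the script. The Connect client is passed in as a parameter, so a test double can stand in for boto3:

```python
# Sketch of dependency injection: the Connect client is an argument,
# not a module global, so tests can substitute a stub.

def enrich_agent_data(agent_id, instance_id, connect_client):
    """Look up agent identity via an injected Connect client."""
    resp = connect_client.describe_user(UserId=agent_id, InstanceId=instance_id)
    info = resp["User"]["IdentityInfo"]
    return {
        "agentFirstName": info.get("FirstName", ""),
        "agentLastName": info.get("LastName", ""),
    }

class StubConnectClient:
    """Test double mimicking the one describe_user call the function makes."""
    def describe_user(self, UserId, InstanceId):
        return {"User": {"IdentityInfo": {"FirstName": "Ada", "LastName": "Lovelace"}}}

print(enrich_agent_data("agent-123", "instance-1", StubConnectClient()))
# In production: enrich_agent_data(agent_id, InstanceId, boto3.client("connect"))
```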
#### **2. Error Handling & Robustness**
* **Critique:**
* **Broad Exception Catching:** The code uses bare `except` blocks or `except Exception` in several places. This is a bad practice as it can suppress unexpected errors (like `KeyboardInterrupt`) and makes it difficult to debug the root cause.
* **Inconsistent Logging:** The code uses `print()` for logging. This is less effective than using Python's built-in `logging` module, which provides log levels, structured output, and better integration with CloudWatch.
* **Error Reporting:** The error notification email only sends `str(e)`, which often lacks the detail needed to diagnose an issue.
* **Recommendation:**
* **Specific Exceptions:** Catch specific exceptions like `json.JSONDecodeError`, `KeyError`, or `boto3.exceptions.ClientError` where appropriate.
* **Centralized Error Handler:** Implement a centralized error-handling function that logs the full stack trace and provides a consistent, detailed message for email notifications.
* **Structured Logging:** Replace all `print()` statements with calls to the `logging` module (e.g., `logger.info()`, `logger.error()`).
#### **3. Naming & Conventions**
* **Critique:**
* **Naming Convention:** Variable names like `acClient`, `dbclient`, and `InstanceId` do not follow the standard Python `snake_case` convention.
* **Duplicated Keys:** The response dictionary in `lambda_handler` defines both `statusCode` and a redundant lowercase `statuscode` key.
* **Recommendation:**
* **Consistency:** Adhere to PEP 8 standards. Use `snake_case` for all variables and functions (e.g., `ac_client`, `db_client`, `instance_id`).
* **Clarity:** Drop the lowercase duplicate and keep only `"statusCode": 200` in the response dictionary.
#### **4. Data Handling & Security**
* **Critique:**
* **Hardcoded Values:** The list of required survey fields is hardcoded. If the survey attributes change, the code must be redeployed.
* **Magic Strings:** Keys like `"INBOUND"` or `"surveyTaken"` are "magic strings" scattered throughout the code.
* **Recommendation:**
* **Configuration:** Define hardcoded values and required keys as constants at the top of the file or, for greater flexibility, load them from a configuration file or a service like AWS Systems Manager Parameter Store.
* **Defensive Programming:** Use `.get()` with a default value when accessing nested dictionary keys (e.g., `ctr_record.get('kinesis', {}).get('data')`) to prevent `KeyError` exceptions.
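As a sketch of the configuration recommendation, the required fields could be defined once and optionally overridden through an environment variable; the `SURVEY_REQUIRED_FIELDS` variable name is an assumption, not part of the current script:

```python
import json
import os

# Default list mirrors the fields currently hardcoded in main_logic.
DEFAULT_REQUIRED_FIELDS = [
    "PAC-Customer-Satisfaction-Survey-Question-1",
    "PAC-Customer-Satisfaction-Survey-Question-2",
    "PAC-Customer-Satisfaction-Survey-Question-3",
    "contactID",
    "surveyStatus",
]

# Hypothetical env var holding a JSON array; falls back to the defaults.
REQUIRED_FIELDS = json.loads(
    os.environ.get("SURVEY_REQUIRED_FIELDS", json.dumps(DEFAULT_REQUIRED_FIELDS))
)

def missing_fields(attributes: dict) -> list:
    """Return the required fields absent from a CTR's Attributes map."""
    return [f for f in REQUIRED_FIELDS if f not in attributes]
```

The same lookup could be pointed at AWS Systems Manager Parameter Store instead, so a change in survey attributes would not require a redeploy.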
#### **5. Code Structure & Best Practices**
* **Critique:**
* **Global State:** The extensive use of global variables makes the code difficult to reason about and test.
* **Redundant Checks:** The check for `HTTPStatusCode` after a DynamoDB `put_item` call is redundant. A successful `put_item` operation will not raise an exception, while a failed one will. The code should rely on exception handling instead of checking the response metadata.
* **Recommendation:**
* **Encapsulation:** Consider using a class to encapsulate the state and behavior related to the survey processing.
* **Pythonic Code:** Simplify verbose constructs like `data = str(data) if not isinstance(data, str) else data` to the more concise `data = str(data)`.
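A minimal sketch of the class-based shape suggested above; the `SurveyProcessor` name and its methods are illustrative, and the per-record work is elided:

```python
class SurveyProcessor:
    """Holds the state the script currently keeps in module globals."""

    def __init__(self, connect_client, survey_table, instance_id,
                 time_zone="America/New_York"):
        self.connect_client = connect_client
        self.survey_table = survey_table
        self.instance_id = instance_id
        self.time_zone = time_zone

    def process(self, records):
        """Process a batch of Kinesis records; returns how many were handled."""
        handled = 0
        for record in records:
            if record.get("kinesis", {}).get("data"):
                # decode, validate, enrich, and store would happen here
                handled += 1
        return handled
```

Because the clients are constructor arguments, a unit test can instantiate the class with stubs and exercise `process` without touching AWS.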
---
### **Action Items for the Developer**
1. **Refactor `main_logic`:** Break it into smaller, testable functions with clear responsibilities.
2. **Improve Error Handling:**
* Replace bare `except` blocks with specific exception types.
* Use the `logging` module instead of `print()`.
* Enhance the error email to include the full stack trace.
3. **Adhere to PEP 8:** Rename variables to follow `snake_case`.
4. **Remove Magic Strings:** Define constants for keys like `"INBOUND"` and the survey question names.
5. **Increase Robustness:** Use `.get()` for safe dictionary access. Remove redundant `HTTPStatusCode` checks by relying on exception handling.
6. **Review Global Variables:** Consider how to make the code more modular and testable by passing dependencies as function arguments.
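Action item 4 (constants for magic strings) could look like the sketch below; the constant names are suggestions, not part of the current script:

```python
# Name each magic string once; a typo then fails loudly in one place.
ATTR_SURVEY_TAKEN = "surveyTaken"
INITIATION_METHOD_INBOUND = "INBOUND"

SURVEY_QUESTIONS = {
    "CustomerSatisfaction": "PAC-Customer-Satisfaction-Survey-Question-1",
    "informationShared": "PAC-Customer-Satisfaction-Survey-Question-2",
    "overallExperience": "PAC-Customer-Satisfaction-Survey-Question-3",
}

def extract_survey_answers(attributes: dict) -> dict:
    """Map the long survey attribute keys to readable names."""
    return {name: attributes[key] for name, key in SURVEY_QUESTIONS.items()}
```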
---
### Business Logic
The core business purpose of this AWS Lambda function is to collect and store customer satisfaction survey data from Amazon Connect call interactions.
Here's a breakdown of the business logic:
1. **Data Source:** The function is triggered by an event, which is expected to be a stream of Amazon Connect Contact Trace Records (CTRs) from a Kinesis stream. These records contain detailed information about each call, including call attributes.
2. **Survey Identification:** The function processes each CTR to determine if it contains a completed survey. It specifically looks for:
* A call initiated as "INBOUND".
* An attribute named `"surveyTaken"` with a value of `"true"`.
3. **Data Extraction:** If a survey is identified, the function extracts specific customer satisfaction metrics and associated call details. The key data points extracted are:
* **Survey Questions:**
* `"PAC-Customer-Satisfaction-Survey-Question-1"`
* `"PAC-Customer-Satisfaction-Survey-Question-2"`
* `"PAC-Customer-Satisfaction-Survey-Question-3"`
* **Identifiers:**
* `contactID`: A unique identifier for the call.
* `surveyStatus`: The status of the survey.
* `CustomerEndpoint.Address`: The customer's phone number.
* **Agent Information:**
* Agent `ARN` (used to get `agentIdentifier`).
* Agent `Username`.
4. **Data Enrichment:** The function enriches the extracted data by making an additional call to the Amazon Connect API to get the agent's full name (`FirstName`, `LastName`) based on their `UserId`.
5. **Timestamp Conversion:** It takes the call's `LastUpdateTimestamp` and converts it to a specific timezone (`America/New_York`) to ensure all survey timestamps are stored consistently.
6. **Data Persistence:** The combined and enriched data is then stored in a DynamoDB table, with `ContactID` serving as the primary key. This persistent storage makes the survey data available for reporting, analytics, and other business processes.
7. **Error and Notification:** If any step of the process fails (e.g., missing data, invalid format, DynamoDB write error), the function sends an email notification to a specified address. This ensures that the operations team is immediately alerted to data processing issues.
In essence, the business value is to automate the capture and storage of customer feedback from calls, providing a structured dataset that can be used to monitor agent performance, identify areas for service improvement, and track customer satisfaction over time.
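The timestamp conversion in step 5 boils down to parsing the ISO-8601 string and shifting it with `ZoneInfo`, roughly as follows (the helper name is illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_local_timestamp(timestamp_str: str, tz: str = "America/New_York") -> str:
    """Parse a CTR LastUpdateTimestamp and convert it to the target timezone."""
    dt = datetime.strptime(timestamp_str, "%Y-%m-%dT%H:%M:%S%z")
    return str(dt.astimezone(ZoneInfo(tz)))

print(to_local_timestamp("2025-08-04T20:45:37+00:00"))
# → 2025-08-04 16:45:37-04:00 (Eastern time is UTC-4 in August)
```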
---
### Technical Logic
The technical logic of the code is implemented as an AWS Lambda function triggered by a Kinesis stream.
1. **Environment Setup:**
* The Lambda function's handler is `lambda_handler`.
* It initializes AWS clients (`boto3`) for SES, Connect, and DynamoDB outside the handler to reuse them across multiple invocations (improving performance and reducing overhead).
* It retrieves configuration values (table names, instance IDs, email addresses, timezone) from environment variables, which is a standard best practice for cloud functions.
2. **`lambda_handler` (Entry Point):**
* This function serves as the main entry point for the Lambda invocation.
* It wraps a call to `main_logic` in a `try...except` block.
* If `main_logic` executes successfully, it returns a standard `200` status code.
* If any `Exception` occurs in `main_logic`, it catches it, logs the error, sets the response status code to `400`, and sends an error notification email using `send_plain_email`.
3. **`main_logic` (Core Processing):**
* It iterates through each `record` in the `event["Records"]` list.
* **Data Decoding:** For each record, it checks for the presence of a Kinesis data stream and then decodes the `base64` encoded data.
* **JSON Parsing:** The decoded data, which is a UTF-8 string, is then parsed as JSON to get the `ctr_data` dictionary.
* **Condition Checking:** It performs a series of conditional checks:
* It checks for the presence of a `"past_ctr_updated_at"` attribute to avoid reprocessing old or duplicate CTRs.
* It verifies that the `InitiationMethod` is `"INBOUND"` and that a survey was taken (`"surveyTaken": "true"`).
* **Data Validation:** If a survey is found, it validates the presence of required survey fields using a list of hardcoded keys. It raises a `KeyError` if any are missing.
* **API Calls:**
* It extracts the `agentIdentifier` from the agent's ARN by splitting the string.
* It makes a synchronous call to `acClient.describe_user()` to get additional agent details.
* **Data Transformation:** It parses the `LastUpdateTimestamp` string into a `datetime` object and then localizes it to `America/New_York` using `ZoneInfo`.
* **Database Interaction:** It prepares a dictionary (`Item`) with all the extracted and enriched data. It then calls `surveyTableName.put_item()` to write the item to the DynamoDB table.
* **Error Handling (Record Level):** Each record's processing is wrapped in its own `try...except` block. This ensures that a failure in one record does not prevent the processing of other records in the same batch. Any error caught at this level also triggers an email notification.
* The function returns a success message in a JSON body if the processing completes.
4. **`send_plain_email` (Notification):**
* This utility function takes a message, recipient list, sender, and other details.
* It constructs a subject line that indicates if the email is an error notification and includes the Lambda name and contact ID.
* It uses the `ses_client.send_email()` method to send a simple text-based email.
* It checks the `HTTPStatusCode` of the response to determine if the email was successfully sent.
5. **`print_log` (Logging):**
* This custom helper pretty-prints JSON data or strings to record system state for debugging. As noted elsewhere in this review, the standard `logging` module would serve the same purpose with log levels and better CloudWatch integration.
### General Comments
This code appears to be a Lambda function designed to process Amazon Connect Contact Trace Records (CTRs) from a Kinesis stream. It extracts customer survey data, and then stores this data in a DynamoDB table. It also includes error handling and notification mechanisms using SES.
The overall goal of the code is clear. However, there are several areas where the code could be improved to align with best practices at a FAANG company. These improvements will make the code more readable, maintainable, testable, and robust.
### Major Review Points
1. **Monolithic Structure and Lack of Modularity:** The code is structured as a single file with several functions. The `main_logic` function is particularly long and complex, handling multiple responsibilities:
* Iterating through Kinesis records.
* Decoding and parsing data.
* Checking for specific conditions (`past_ctr_updated_at`, `surveyTaken`).
* Validating data.
* Calling AWS APIs (Connect, DynamoDB).
* Error handling and notification.
This makes the code difficult to read, test, and debug. Consider refactoring this into smaller, single-purpose functions or classes. For example, a `process_kinesis_record` function could handle the logic for a single record, and a `store_survey_data` function could encapsulate the DynamoDB interaction. This also makes it easier to write unit tests for each individual piece of logic.
2. **Inconsistent Error Handling:** The error handling is inconsistent.
* In `lambda_handler`, a broad `except Exception` block catches all errors and sends an email. This is a reasonable top-level catch-all, but it hides the specific cause of the error.
* Inside `main_logic`, there's another `try...except` block for each record. This is a good pattern, as it allows the function to continue processing other records even if one fails. However, it also uses a broad `except Exception` which could mask issues.
* The `print_log` function has a `try...except` block with a bare `except`, which is an anti-pattern. This will catch `KeyboardInterrupt`, `SystemExit`, and other serious exceptions that should not be suppressed. It's better to catch specific exceptions or, if none are known, at least `except Exception`.
A better approach would be to:
* Use more specific exception types (`KeyError`, `ValueError`, etc.) where appropriate.
* Log more detailed information about the error, including the stack trace, which is crucial for debugging in production. The current logging only prints `str(e)`.
* Consider using a structured logging library (e.g., `logging`) instead of `print()`. Structured logs are much easier to search and analyze with tools like CloudWatch Logs Insights.
3. **Use of Global Variables:** The code relies heavily on global variables (`ses_client`, `acClient`, `dbclient`, `InstanceId`, `surveyTableName`, `timeZone`, etc.). This makes the code harder to test, as you can't easily mock these dependencies. A better approach is to pass these dependencies as arguments to the functions that need them.
4. **Hardcoded Values:**
* `timeZone = "America/New_York"` is a hardcoded string. This should be configurable via an environment variable to make the code more portable.
* The DynamoDB table name is loaded from an environment variable, which is good, but the AWS clients are initialized globally. While this is a common pattern for Lambda to reuse connections, it's worth noting that if the Lambda function needs to connect to multiple regions or accounts, this approach would need to be changed.
* The list of `required_fields` is hardcoded. If the survey questions change, this code will need to be updated and redeployed. A more flexible solution might be to get these from a configuration file or another service like Parameter Store.
5. **Code Duplication:**
* The line `response = {"statusCode": 200, "statuscode": 200}` has a duplicated key (`statusCode` and `statuscode`). This is a typo that should be corrected.
* The `send_plain_email` function is called from multiple places with similar parameters. This is good, but the error message strings are hardcoded at the call site. Consider passing a more structured error object to the email function so it can format a consistent, detailed message.
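The stack-trace logging suggested in point 2 can be sketched as below; the `format_error` helper is illustrative:

```python
import json
import logging
import traceback

logger = logging.getLogger(__name__)

def format_error(e: Exception) -> str:
    """Build an error report that includes the full stack trace."""
    return f"ERROR: {e}\nStack Trace: {traceback.format_exc()}"

try:
    json.loads("not json")  # raises json.JSONDecodeError
except Exception as e:
    report = format_error(e)
    logger.error(report)  # manual formatting, or simply:
    logger.exception("Failed to parse CTR payload")  # logs the trace automatically
```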
### Minor Review Points
1. **Linting and Formatting:** The code could benefit from a linter like `flake8` or `pylint` and a formatter like `black`.
* Variable names like `acClient` and `dbclient` don't follow the standard Python `snake_case` naming convention. `ac_client` and `db_client` would be preferred.
* The `print` statements could be replaced with a proper logging framework.
* Indentation and spacing could be more consistent (e.g., the code uses 4 spaces, but some lines have different spacing).
2. **API Response Checking:** The code checks for `insertdata["ResponseMetadata"]["HTTPStatusCode"] == 200`. While this works, a more robust way to check for a successful DynamoDB `put_item` operation is to handle the potential exceptions from the API call itself. If the call succeeds, you can assume the item was put. The `put_item` method will raise an exception on failure, so a successful return means the operation was successful. The current check is a bit redundant if you're already in a `try` block.
3. **Magic Strings:**
* `"surveyTaken"`, `"INBOUND"`, `"LastUpdateTimestamp"`, etc. are "magic strings" that are repeated throughout the code. These should be defined as constants at the top of the file to prevent typos and make the code easier to update.
* The survey question names (`"PAC-Customer-Satisfaction-Survey-Question-1"`) are very long and cumbersome. These could be mapped to more readable variable names or a dictionary lookup.
4. **Redundant Code:**
* `data = str(data) if not isinstance(data, str) else data` in `send_plain_email` is a bit verbose. A simpler way to achieve this is `data = str(data)`.
* `if "kinesis" in ctr_record and "data" in ctr_record["kinesis"]:` is a good start, but a more Pythonic and safer way to access nested dictionaries is using `.get()` with a default value. For example, `kinesis_data = ctr_record.get('kinesis', {}).get('data')`. This avoids `KeyError` exceptions and simplifies the logic.
### Refactored Code Example (Illustrative)
Here's a small example of how some of the refactoring could look. This is not a complete rewrite, but shows the direction for improvement.
```python
import base64
import json
import logging
import os
import traceback

import boto3

# Configure logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Constants (e.g., from a config file or env vars)
SURVEY_TABLE_NAME = os.environ.get("DYNAMODB_SURVEY_TABLE", "").strip()
CONNECT_INSTANCE_ID = os.environ.get("INSTANCE_ID", "").strip()
TIME_ZONE = os.environ.get("TIMEZONE", "America/New_York")
TO_ADDRESSES = [addr.strip() for addr in os.environ.get("To_AWS_Address", "").split(",")]
FROM_ADDRESS = os.environ.get("From_AWS_Address", "").strip()

# AWS Clients
ses_client = boto3.client("ses")
connect_client = boto3.client("connect")
dynamodb_resource = boto3.resource("dynamodb")
survey_table = dynamodb_resource.Table(SURVEY_TABLE_NAME)


def handle_error(e: Exception, context, contact_id: str = None):
    """
    Centralized error handling and notification.
    """
    error_message = f"ERROR: {str(e)}\nStack Trace: {traceback.format_exc()}"
    logger.error(error_message)
    send_plain_email(
        data=error_message,
        to_addresses=TO_ADDRESSES,
        from_address=FROM_ADDRESS,
        lambda_name=context.function_name,
        contact_id=contact_id,
        is_error=True,
    )


def decode_kinesis_data(data_str: str) -> dict:
    """Decodes and parses a Kinesis data string."""
    try:
        decoded_data = base64.b64decode(data_str).decode("utf-8")
        return json.loads(decoded_data)
    except (base64.binascii.Error, json.JSONDecodeError, UnicodeDecodeError) as e:
        logger.error(f"Error decoding or parsing data: {e}")
        return None


def store_survey_data(survey_data: dict):
    """Stores survey data in DynamoDB."""
    try:
        # Data validation and transformation here...
        # ...
        insert_data = survey_table.put_item(Item=survey_data)
        logger.info(f"Survey inserted successfully: {insert_data}")
    except Exception as e:
        logger.error(f"Error inserting survey data: {e}")
        raise  # Re-raise the exception to be caught by the record-level handler


def process_ctr_record(record: dict, context):
    """Processes a single Kinesis record."""
    contact_id = None
    try:
        kinesis_data_str = record.get("kinesis", {}).get("data")
        if not kinesis_data_str:
            logger.warning("Invalid or missing 'data' field in the Kinesis record.")
            return
        ctr_data = decode_kinesis_data(kinesis_data_str)
        if not ctr_data:
            return
        attributes = ctr_data.get("Attributes", {})
        if "past_ctr_updated_at" in attributes:
            logger.info("Past CTR record found. Skipping processing.")
            return
        if ctr_data.get("InitiationMethod") == "INBOUND" and attributes.get("surveyTaken") == "true":
            # Extract and validate fields
            # ...
            store_survey_data(survey_data_payload)
        else:
            logger.info("No survey to process for this record.")
    except Exception as e:
        handle_error(e, context, contact_id)
        # Note: We don't re-raise here to continue processing the next record


def lambda_handler(event, context):
    try:
        for record in event["Records"]:
            process_ctr_record(record, context)
        return {"statusCode": 200, "body": json.dumps("Execution Completed")}
    except Exception as e:
        handle_error(e, context)
        return {"statusCode": 400, "body": json.dumps({"error": str(e)})}
```
### Final Verdict
The current code is functional but could be significantly improved for production use in a large-scale environment. The primary focus should be on refactoring the code into smaller, more manageable units, centralizing error handling, and making the code more robust and testable by reducing reliance on global state and hardcoded values.
---
```python
import base64
import json
import os
from datetime import datetime
from zoneinfo import ZoneInfo

import boto3

ses_client = boto3.client("ses")
acClient = boto3.client("connect")
dbclient = boto3.resource("dynamodb")
InstanceId = os.environ.get("INSTANCE_ID", "").strip()
surveyTableName = dbclient.Table(os.environ.get("DYNAMODB_SURVEY_TABLE", "").strip())
timeZone = "America/New_York"
to_aws_addresses = os.environ.get("To_AWS_Address", "")
if to_aws_addresses:
    to_aws_addresses = to_aws_addresses.strip().split(",")
from_aws_address = os.environ.get("From_AWS_Address", "")
if from_aws_address:
    from_aws_address = from_aws_address.strip()


def print_log(data, data_label="", sort_keys=True, debug=True):
    if not debug:
        return
    if isinstance(data, str) and data_label:
        print(f"{data_label} = {data}")
        return
    if data_label:
        print(f"{data_label} => ")
    try:
        print(json.dumps(data, sort_keys=sort_keys))
    except:
        print(data)


def lambda_handler(event, context):
    response = {"statusCode": 200, "statuscode": 200}
    try:
        response = main_logic(event, context)
    except Exception as e:
        response["statusCode"] = 400
        response["statuscode"] = 400
        response["error"] = str(e)
        print(json.dumps(response))
        send_plain_email(
            data=f"ERROR: {str(e)}\nSTATUS_CODE: 400",
            to_addresses=to_aws_addresses,
            from_address=from_aws_address,
            lambda_name=context.function_name,
            is_error=True,
        )
    return response


def main_logic(event, context):
    print_log(event, "event")
    contactID = None
    for ctr_record in event["Records"]:
        try:
            if "kinesis" in ctr_record and "data" in ctr_record["kinesis"]:
                data_str = ctr_record["kinesis"]["data"]
                try:
                    decoded_data = base64.b64decode(data_str).decode("utf-8")
                    ctr_data = json.loads(decoded_data)
                except Exception as e:
                    print(f"Error decoding or parsing data: {e}")
                    continue
            else:
                print("Invalid or missing 'data' field in the Kinesis record.")
                continue
            data = ctr_data
            if "past_ctr_updated_at" in data.get("Attributes", {}):
                print("Past CTR record found. Skipping processing.")
                continue
            if not data:
                send_plain_email(
                    data="ctrData not found",
                    to_addresses=to_aws_addresses,
                    from_address=from_aws_address,
                    contact_id="",
                    is_error=True,
                    lambda_name=context.function_name,
                )
                continue
            if "surveyTaken" in data.get("Attributes", {}):
                if data.get("InitiationMethod") == "INBOUND" and data["Attributes"].get("surveyTaken") == "true":
                    attributes = data.get("Attributes", {})
                    required_fields = [
                        "PAC-Customer-Satisfaction-Survey-Question-1",
                        "PAC-Customer-Satisfaction-Survey-Question-2",
                        "PAC-Customer-Satisfaction-Survey-Question-3",
                        "contactID",
                        "surveyStatus",
                    ]
                    missing = [field for field in required_fields if field not in attributes]
                    if missing:
                        raise KeyError(f"Missing required survey fields: {', '.join(missing)}")
                    CustomerSatisfaction = attributes["PAC-Customer-Satisfaction-Survey-Question-1"]
                    informationShared = attributes["PAC-Customer-Satisfaction-Survey-Question-2"]
                    overallExperience = attributes["PAC-Customer-Satisfaction-Survey-Question-3"]
                    contactID = attributes["contactID"]
                    surveyStatus = attributes["surveyStatus"]
                    customerPhone = data.get("CustomerEndpoint", {}).get("Address", "")
                    if not customerPhone:
                        raise KeyError("Missing CustomerEndpoint.Address")
                    agentIdentifier = data.get("Agent", {}).get("ARN", "").split(":")[5].split("/")[3]
                    agentUserName = data.get("Agent", {}).get("Username", "Unknown")
                    resp = acClient.describe_user(UserId=agentIdentifier, InstanceId=InstanceId)
                    agentFirstName = resp["User"]["IdentityInfo"].get("FirstName", "")
                    agentLastName = resp["User"]["IdentityInfo"].get("LastName", "")
                    TimeStamp = str(
                        datetime.strptime(data["LastUpdateTimestamp"], "%Y-%m-%dT%H:%M:%S%z")
                        .astimezone(ZoneInfo(timeZone))
                    )
                    insertdata = surveyTableName.put_item(
                        Item={
                            "ContactID": contactID,
                            "TimeStamp_in_EST": TimeStamp,
                            "PhoneNumber": customerPhone,
                            "OverallExperience": overallExperience,
                            "informationShared": informationShared,
                            "CustomerSatisfaction": CustomerSatisfaction,
                            "surveyStatus": surveyStatus,
                            "agentUserName": agentUserName,
                            "agentFirstName": agentFirstName,
                            "agentLastName": agentLastName,
                        }
                    )
                    print(insertdata)
                    if insertdata["ResponseMetadata"]["HTTPStatusCode"] == 200:
                        print("survey inserted")
                    else:
                        print("Error: survey not inserted")
                        send_plain_email(
                            data="survey not inserted",
                            to_addresses=to_aws_addresses,
                            from_address=from_aws_address,
                            contact_id=contactID,
                            is_error=True,
                            lambda_name=context.function_name,
                        )
                else:
                    print("No Survey to process")
            else:
                print("No Survey for this call")
        except Exception as e:
            print(e)
            send_plain_email(
                data=str(e),
                to_addresses=to_aws_addresses,
                from_address=from_aws_address,
                contact_id=contactID,
                is_error=True,
                lambda_name=context.function_name,
            )
    return {
        "statusCode": 200,
        "body": json.dumps("Execution Completed for Survey Lambda"),
    }


def send_plain_email(
    data,
    to_addresses: list = None,
    from_address: str = None,
    lambda_name: str = None,
    contact_id: str = None,
    is_error=False,
):
    data = str(data) if not isinstance(data, str) else data
    if not (to_addresses and from_address):
        return f"ERROR: Either 'to_address' or 'from_address' are missing, {to_addresses = } | {from_address = }"
    subject_data = "AWS LAMBDA ERROR NOTIFICATION" if is_error else "AWS LAMBDA NOTIFICATION"
    if lambda_name:
        subject_data += f" | {lambda_name}"
    if contact_id:
        data += f"\n\n ContactID: {contact_id}."
        subject_data += f" | {contact_id}"
    response = ses_client.send_email(
        Destination={"ToAddresses": to_addresses},
        Message={
            "Body": {"Text": {"Charset": "UTF-8", "Data": data}},
            "Subject": {"Charset": "UTF-8", "Data": subject_data},
        },
        Source=from_address,
    )
    if response["ResponseMetadata"]["HTTPStatusCode"] == 200:
        return "SENT"
    else:
        print("ERROR: Email not sent")
        return "NOT SENT"
```