diff --git a/tech_docs/database/SQLite3.md b/tech_docs/database/SQLite3.md
new file mode 100644
index 0000000..f1d75b9
--- /dev/null
+++ b/tech_docs/database/SQLite3.md
@@ -0,0 +1,69 @@
+Working with SQLite3 in Python involves several key steps, from connecting to a SQLite database to performing database operations and finally closing the connection. Below is a basic outline and explanation of a Python script that uses SQLite3 for database operations. The script demonstrates how to connect to a database (creating it if it does not exist), create a table, insert data, query data, handle transactions with commit/rollback, and finally close the connection.
+
+```python
+import sqlite3
+
+# Define and connect to the database
+# This will create the database file if it does not already exist
+conn = sqlite3.connect('example.db')
+
+# Create a cursor object using the cursor() method
+cursor = conn.cursor()
+
+# Define a table
+# If the table already exists, this statement has no effect
+cursor.execute('''CREATE TABLE IF NOT EXISTS inventory
+                  (item_id INTEGER PRIMARY KEY, name TEXT, quantity INTEGER)''')
+
+# Insert data into the table
+cursor.execute('''INSERT INTO inventory (name, quantity)
+                  VALUES ('Apples', 30), ('Bananas', 45), ('Oranges', 20)''')
+
+# Commit the transaction
+# Operations that modify data must be committed to make the changes permanent
+conn.commit()
+
+# Query the database
+cursor.execute('''SELECT * FROM inventory''')
+for row in cursor.fetchall():
+    print(row)
+
+# Handling transactions with commit/rollback
+try:
+    # Perform some database operations
+    cursor.execute('''UPDATE inventory SET quantity = 25 WHERE name = 'Apples' ''')
+    # More operations...
+
+    # Commit if everything is fine
+    conn.commit()
+except sqlite3.Error as e:
+    # Roll back on error
+    print(f"An error occurred: {e}")
+    conn.rollback()
+
+# Close the cursor and connection to the database
+cursor.close()
+conn.close()
+```
+
+Here's what each part of the script does:
+
+1. **Import SQLite3**: The `sqlite3` module is imported to use SQLite database functionality.
+
+2. **Connect to Database**: The `connect` function is used to connect to an SQLite database. It takes the database file name as an argument. If the file doesn't exist, SQLite will create it.
+
+3. **Creating a Cursor Object**: A cursor object is created using the `cursor()` method. The cursor is used to execute SQL commands.
+
+4. **Create Table**: The `execute` method of the cursor is used to execute SQL commands. Here, it's used to create a new table if it doesn't already exist.
+
+5. **Insert Data**: Inserts data into the table. SQLite supports inserting multiple records in a single command.
+
+6. **Commit Transaction**: If you've performed operations that modify the database, you must commit these changes to make them permanent.
+
+7. **Query Data**: Executes a SELECT statement to fetch all records from the table, which are then printed out.
+
+8. **Handling Transactions with Commit/Rollback**: Demonstrates error handling in transactions. If an error occurs during a database operation, the changes are rolled back.
+
+9. **Close Cursor and Connection**: Finally, the cursor and the connection to the database are closed.
+
+This script forms a basic template for performing database operations with SQLite in Python. Depending on your needs, you can modify and expand upon this template, such as by adding more complex queries, using parameters in your SQL commands to avoid SQL injection, and handling more sophisticated error scenarios.
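As a starting point for that last improvement, here is a minimal sketch of the same insert and a lookup rewritten with `?` placeholders, so values are bound by the driver instead of being spliced into the SQL string; it reuses the `example.db` file and `inventory` table from the script above.

```python
import sqlite3

conn = sqlite3.connect('example.db')
cursor = conn.cursor()

cursor.execute('''CREATE TABLE IF NOT EXISTS inventory
                  (item_id INTEGER PRIMARY KEY, name TEXT, quantity INTEGER)''')

# executemany() binds each (name, quantity) tuple to the ? placeholders,
# so the values are never interpolated into the SQL text
items = [('Apples', 30), ('Bananas', 45), ('Oranges', 20)]
with conn:  # as a context manager, the connection commits on success and rolls back on error
    cursor.executemany('INSERT INTO inventory (name, quantity) VALUES (?, ?)', items)

# The same placeholder style works for queries built from user input
wanted = 'Apples'
cursor.execute('SELECT item_id, name, quantity FROM inventory WHERE name = ?', (wanted,))
print(cursor.fetchone())

cursor.close()
conn.close()
```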
\ No newline at end of file diff --git a/tech_docs/database/sql_notes.md b/tech_docs/database/sql_notes.md new file mode 100644 index 0000000..5b89e72 --- /dev/null +++ b/tech_docs/database/sql_notes.md @@ -0,0 +1,1103 @@ +# Fundamentals of SQL: A Concise Overview + +SQL, or Structured Query Language, is the standard language for relational database management and data manipulation. It's divided into various categories, each serving a specific aspect of database interaction: Data Manipulation Language (DML), Data Definition Language (DDL), Data Control Language (DCL), and Transaction Control Language (TCL). + +## Data Manipulation Language (DML) + +DML commands are pivotal for day-to-day operations on data stored within database tables. + +- **SELECT**: Retrieves data from one or more tables, supporting operations like sorting (`ORDER BY`) and filtering (`WHERE`). +- **INSERT**: Adds new rows to a table, specifying columns and corresponding values. +- **UPDATE**: Alters existing records in a table based on specified conditions, allowing changes to one or multiple rows. +- **DELETE**: Eliminates specified rows from a table, with the capability to delete all rows when conditions are omitted or generalized. + +## Data Definition Language (DDL) + +DDL commands focus on the structural blueprint of the database, facilitating the creation and modification of schemas. + +- **CREATE**: Initiates new database objects, like tables or views, defining their structure and relationships. +- **ALTER**: Adjusts existing database object structures, enabling the addition, modification, or deletion of columns and constraints. +- **DROP**: Completely removes database objects, erasing their definitions and data. +- **TRUNCATE**: Efficiently deletes all rows from a table, resetting its state without affecting its structure. + +## Data Control Language (DCL) + +DCL commands govern the access and permissions for database objects, ensuring secure data management. + +- **GRANT**: Assigns specific privileges to users or roles, covering actions like SELECT, INSERT, UPDATE, and DELETE. +- **REVOKE**: Withdraws previously granted privileges, tightening control over database access. + +## Transaction Control Language (TCL) + +TCL commands provide control over transactional operations, ensuring data integrity and consistency through atomic operations. + +- **COMMIT**: Finalizes the changes made during a transaction, making them permanent and visible to all subsequent transactions. +- **ROLLBACK**: Undoes changes made during the current transaction, reverting to the last committed state. +- **SAVEPOINT**: Establishes checkpoints within a transaction, to which one can revert without affecting the entire transaction. +- **SET TRANSACTION**: Specifies transaction properties, including isolation levels which dictate visibility between concurrent transactions and access mode (read/write). + +Understanding and effectively utilizing these SQL command categories enhances database management, promotes data integrity, and supports robust data manipulation and access control strategies. Each plays a vital role in the comprehensive management of relational databases, catering to various needs from basic data handling to complex transaction management and security enforcement. + +--- + +When facing complaints about a slow database, where the presumption is a database issue, it's crucial to approach troubleshooting systematically. 
Performance issues can stem from a myriad of factors, from query inefficiency, hardware limitations, to configuration missettings. This advanced technical guide aims to equip database administrators (DBAs) and developers with strategies to diagnose and resolve database performance bottlenecks. + +# Advanced Technical Guide: Troubleshooting a Slow Database + +## Step 1: Initial Assessment + +### 1.1 **Identify Symptoms** +- Gather specific complaints: long-running queries, slow application performance, timeouts. +- Determine if the issue is global (affecting all queries) or localized (specific queries or operations). + +### 1.2 **Monitor Database Performance Metrics** +- Utilize built-in database monitoring tools to track CPU usage, memory utilization, I/O throughput, and other relevant metrics. +- Identify abnormal patterns: spikes in CPU or I/O, memory pressure, etc. + +## Step 2: Narrow Down the Issue + +### 2.1 **Analyze Slow Queries** +- Use query logs or performance schemas to identify slow-running queries. +- Analyze execution plans for these queries to pinpoint inefficiencies (full table scans, missing indexes, etc.). + +### 2.2 **Check Database Configuration** +- Review configuration settings that could impact performance: buffer pool size, max connections, query cache settings (if applicable). +- Compare current configurations against recommended settings for your workload and DBMS. + +### 2.3 **Assess Hardware and Resource Utilization** +- Determine if the hardware (CPU, RAM, storage) is adequate for your workload. +- Check for I/O bottlenecks: slow disk access times, high I/O wait times. +- Monitor network latency and bandwidth, especially in distributed database setups. + +## Step 3: Systematic Troubleshooting + +### 3.1 **Query Optimization** +- Optimize slow-running queries: add missing indexes, rewrite inefficient queries, and consider query caching where applicable. +- Evaluate the use of more efficient data types and schema designs to reduce data footprint and improve access times. + +### 3.2 **Database Maintenance** +- Perform routine database maintenance: update statistics, rebuild indexes, and purge unnecessary data to keep the database lean and efficient. +- Consider partitioning large tables to improve query performance and management. + +### 3.3 **Configuration Tuning** +- Adjust database server configurations to better utilize available hardware resources. This might involve increasing buffer pool size, adjusting cache settings, or tuning connection pools. +- Implement connection pooling and manage database connections efficiently to avoid overhead from frequent disconnections and reconnections. + +### 3.4 **Scale Resources** +- If hardware resources are identified as a bottleneck, consider scaling up (more powerful hardware) or scaling out (adding more nodes, if supported). +- Explore the use of faster storage solutions (e.g., SSDs over HDDs) for critical databases. + +### 3.5 **Application-Level Changes** +- Review application logic for unnecessary database calls or operations that could be optimized. +- Implement caching at the application level to reduce database load for frequently accessed data. + +## Step 4: Review and Continuous Monitoring + +### 4.1 **Implement Monitoring Solutions** +- Set up comprehensive monitoring that covers database metrics, system performance, and application performance to quickly identify future issues. +- Use alerting mechanisms for proactive issue detection based on thresholds. 
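Tying together steps 2.1 (analyze slow queries) and 3.1 (query optimization), the sketch below shows the usual shape of that loop: inspect the plan of a suspect query, add an index that matches its filter, then re-check the plan. The `orders` table and columns are illustrative, and the exact statement varies by system (`EXPLAIN ANALYZE` in PostgreSQL, `EXPLAIN` in MySQL, `EXPLAIN QUERY PLAN` in SQLite).

```sql
-- Step 2.1: inspect how the suspect query is executed (look for full table scans)
EXPLAIN
SELECT order_id, total
FROM orders
WHERE customer_id = 42
  AND created_at >= '2024-01-01';

-- Step 3.1: add an index that matches the filter columns
CREATE INDEX idx_orders_customer_created
    ON orders (customer_id, created_at);

-- Re-run the EXPLAIN above and confirm the new index is now used
```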
+ +### 4.2 **Regular Reviews** +- Conduct regular performance reviews to identify potential issues before they become critical. +- Keep documentation of configurations, optimizations, and known issues for future reference. + +## Conclusion + +Troubleshooting a slow database requires a methodical approach to identify and rectify the root causes of performance issues. By systematically assessing and addressing each potential area of concern—from query performance and schema optimization to hardware resources and configuration settings—DBAs can significantly improve database performance. Continuous monitoring and regular maintenance are key to ensuring sustained database health and performance, allowing for proactive rather than reactive management of the database environment. + +--- + +Crafting efficient SQL queries and troubleshooting slow queries are critical skills for optimizing database performance and ensuring the responsiveness of applications that rely on database operations. This advanced guide delves into strategies for writing high-performance SQL queries and methodologies for diagnosing and improving the performance of slow queries. + +# Advanced Guide to Crafting Efficient SQL Queries and Troubleshooting + +## Writing Efficient SQL Queries + +### 1. **Understand Your Data and Database Structure** +- Familiarize yourself with the database schema, indexes, and the data distribution within tables (e.g., through histograms). + +### 2. **Make Use of Indexes** +- Utilize indexes on columns that are frequently used in `WHERE`, `JOIN`, `ORDER BY`, and `GROUP BY` clauses. However, be mindful that excessive indexing can slow down write operations. + +### 3. **Optimize JOINs** +- Use the appropriate type of JOIN for your query. Prefer `INNER JOIN` over `OUTER JOIN` when possible, as it is generally more efficient. +- Ensure that the joined tables have indexes on the joined columns. + +### 4. **Limit the Data You Work With** +- Be specific about the columns you select—avoid using `SELECT *`. +- Use `WHERE` clauses to filter rows early and reduce the amount of data processed. + +### 5. **Use Subqueries and CTEs Wisely** +- Common Table Expressions (CTEs) can improve readability, but they may not always be optimized by the query planner. Test performance with and without CTEs. +- Materialized subqueries (in the `FROM` clause) can sometimes be optimized more efficiently than scalar or correlated subqueries. + +### 6. **Aggregate and Sort Efficiently** +- When using `GROUP BY`, limit the number of grouping columns and consider indexing them. +- Use `ORDER BY` judiciously, as sorting can be resource-intensive. Sort on indexed columns when possible. + +## Troubleshooting Slow Queries + +### 1. **Identify the Slow Query** +- Use logging tools or query performance monitoring features provided by your RDBMS to identify slow-running queries. + +### 2. **Analyze the Execution Plan** +- Most RDBMS offer query execution plans to understand how a query is executed. Look for full table scans, inefficient joins, and the use of indexes. + +### 3. **Optimize Data Access Patterns** +- Rewrite queries to access only the necessary data. Consider changing `JOIN` conditions, using subqueries, or restructuring queries to make them more efficient. + +### 4. **Review and Optimize Indexes** +- Ensure that your queries are using indexes efficiently. Adding, removing, or modifying indexes can significantly impact performance. 
+- Consider index types (e.g., B-tree, hash, full-text) and their suitability for your queries. + +### 5. **Optimize Query Logic** +- Simplify complex queries. Break down complex operations into simpler steps or multiple queries if it results in better performance. +- Use set-based operations instead of looping constructs when dealing with large datasets. + +### 6. **Database Configuration and Server Resources** +- Ensure that the database configuration is optimized for your workload. Parameters related to memory usage, file storage, and connection handling can impact performance. +- Assess if server resource constraints (CPU, memory, I/O) are bottlenecks. Upgrading hardware or balancing the load may be necessary. + +### 7. **Regular Maintenance** +- Perform regular maintenance tasks such as updating statistics, rebuilding indexes, and vacuuming (in PostgreSQL) to keep the database performing optimally. + +## Conclusion + +Efficient SQL query writing and effective troubleshooting of slow queries are fundamental to maintaining high database performance. By applying a thoughtful approach to query design, making judicious use of indexes, and systematically diagnosing performance issues through execution plans and database monitoring tools, developers and DBAs can ensure their databases support their application's needs with high efficiency. Regular review and optimization of queries and database settings are crucial as data volumes grow and application requirements evolve. + +--- + +Creating an advanced guide on SQL data types involves delving into the nuances of choosing the most appropriate and performance-optimized types for various scenarios. Understanding and making informed decisions about data types is crucial for database efficiency, data integrity, and optimal storage. This guide targets intermediate to advanced SQL users, focusing on common relational database management systems (RDBMS) like PostgreSQL, MySQL, SQL Server, and Oracle. + +# Advanced Guide on SQL Data Types and Their Selection + +## Numeric Types + +### Integer Types +- **Variants**: `INT`, `SMALLINT`, `BIGINT`, `TINYINT` +- **Use When**: You need to store whole numbers, either positive or negative. Choice depends on the range of values. +- **Considerations**: Smaller types like `SMALLINT` consume less space and can be more efficient, but ensure the range fits your data. + +### Decimal and Floating-Point Types +- **Variants**: `DECIMAL`, `NUMERIC`, `FLOAT`, `REAL`, `DOUBLE PRECISION` +- **Use When**: Storing precise decimal values (`DECIMAL`, `NUMERIC`) or when approximations are acceptable (`FLOAT`, `REAL`, `DOUBLE`). +- **Considerations**: `DECIMAL` and `NUMERIC` are ideal for financial calculations where precision matters. Floating-point types are suited for scientific calculations. + +## String Types + +### CHAR and VARCHAR +- **Variants**: `CHAR(n)`, `VARCHAR(n)`, `TEXT` +- **Use When**: Storing strings. Use `CHAR` for fixed-length strings and `VARCHAR` for variable-length strings. `TEXT` for long text fields without a specific size limit. +- **Considerations**: `CHAR` can waste storage space for shorter entries, while `VARCHAR` is more flexible. `TEXT` is useful for long-form text. + +### Binary Strings +- **Variants**: `BINARY`, `VARBINARY`, `BLOB` +- **Use When**: Storing binary data, such as images or files. +- **Considerations**: Choose based on the expected size of the data. `BLOB` types are designed for large binary objects. 
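To make the numeric and string trade-offs above concrete, here is a hedged sketch of a table definition that applies them; the table and column names are illustrative, exact type names vary slightly by RDBMS, and the `created_at` column previews the date-and-time types covered next.

```sql
CREATE TABLE products (
    product_id   BIGINT PRIMARY KEY,       -- wide range for a surrogate key
    category_id  INT NOT NULL,             -- fits a standard 4-byte integer
    stock        SMALLINT NOT NULL,        -- small, bounded range of values
    unit_price   DECIMAL(10, 2) NOT NULL,  -- exact precision for money
    weight_kg    FLOAT,                    -- approximation is acceptable here
    sku          CHAR(8) NOT NULL,         -- genuinely fixed-length code
    description  VARCHAR(200),             -- variable-length text
    photo        BLOB,                     -- binary payload such as an image
    created_at   TIMESTAMP NOT NULL        -- see the date and time types below
);
```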
+ +## Date and Time Types + +### DATE, TIME, DATETIME/TIMESTAMP +- **Use When**: Storing dates (`DATE`), times (`TIME`), or both (`DATETIME`, `TIMESTAMP`). +- **Considerations**: `TIMESTAMP` often includes timezone information, making it suited for applications needing time zone awareness. `DATETIME` does not store time zone data. + +### INTERVAL +- **Use When**: Representing durations or periods of time. +- **Considerations**: Useful for calculations over periods, e.g., adding a time interval to a timestamp. + +## Specialized Types + +### ENUM +- **Use When**: A column can only contain a small set of predefined values. +- **Considerations**: Improves data integrity but can be restrictive. Changing the ENUM list requires altering the table schema. + +### JSON and JSONB (PostgreSQL) +- **Use When**: Storing JSON data directly in a column. +- **Considerations**: `JSONB` stores data in a binary format, making it faster to access but slower to insert compared to `JSON`. Ideal for data with a non-fixed schema. + +### Spatial Data Types (GIS data) +- **Variants**: `GEOMETRY`, `POINT`, `LINESTRING`, `POLYGON`, etc. (Varies by RDBMS) +- **Use When**: Storing geographical data that represents points, lines, shapes, etc. +- **Considerations**: Requires understanding of GIS concepts and often specific extensions or support (e.g., PostGIS for PostgreSQL). + +## Advanced Considerations + +### Choosing the Right Type for Performance +- Precision matters: For numeric types, consider the range and precision required. Overestimating can lead to unnecessary storage and performance overhead. +- Text storage: Prefer `VARCHAR` over `CHAR` for most cases to save space, unless you're sure about the fixed length of the data. +- Use native types for special data: Leverage RDBMS-specific types like `JSONB` in PostgreSQL for better performance when working with JSON data. + +### Impact on Indexing and Search Performance +- Data types directly affect indexing efficiency and search performance. For instance, indexes on smaller numeric types are generally faster than those on larger numeric or string types. +- For searching, consider full-text search capabilities for large text fields, which can be more efficient than LIKE or regular expression patterns. + +## Conclusion + +Understanding the nuances of SQL data types and making informed choices based on the nature of the data, storage requirements, and query performance can significantly optimize database functionality and efficiency. This advanced guide aims to equip you with the knowledge to make those choices, ensuring data integrity and optimized performance across various use cases and RDBMS environments. + +--- + +To create a reference guide that provides context and a complete picture of SQL terms, particularly focusing on Data Manipulation Language (DML), Data Definition Language (DDL), and Data Control Language (DCL), it's important to understand what each of these terms means and how they are used in the context of managing and interacting with databases. This guide aims to flesh out these concepts with definitions and examples, providing a quick yet comprehensive refresher. + +# SQL Reference Guide: DML, DDL, and DCL + +## Data Manipulation Language (DML) + +DML is a subset of SQL used for adding (inserting), deleting, and modifying (updating) data in a database. DML commands do not alter the structure of the table itself, but rather, work with the data within tables. + +### SELECT +- **Purpose**: Retrieves data from one or more tables in a database. 
+- **Use Case**: Fetching user information from a `users` table.
+- **Example**: `SELECT username, email FROM users WHERE isActive = 1;`
+
+### INSERT
+- **Purpose**: Adds new rows (records) to a table.
+- **Use Case**: Adding a new user to the `users` table.
+- **Example**: `INSERT INTO users (username, email, isActive) VALUES ('john_doe', 'john@example.com', 1);`
+
+### UPDATE
+- **Purpose**: Modifies existing data within a table.
+- **Use Case**: Updating a user's email address in the `users` table.
+- **Example**: `UPDATE users SET email = 'new_email@example.com' WHERE username = 'john_doe';`
+
+### DELETE
+- **Purpose**: Removes rows from a table.
+- **Use Case**: Removing a user from the `users` table.
+- **Example**: `DELETE FROM users WHERE username = 'john_doe';`
+
+## Data Definition Language (DDL)
+
+DDL encompasses SQL commands used to define or modify the structure of the database schema. It deals with descriptions of the database schema and is used to create and modify the structure of database objects in the database.
+
+### CREATE
+- **Purpose**: Creates new tables, views, or other database objects.
+- **Use Case**: Creating a new table called `users`.
+- **Example**: `CREATE TABLE users (id INT PRIMARY KEY, username TEXT, email TEXT, isActive INT);`
+
+### ALTER
+- **Purpose**: Modifies the structure of an existing database object, like adding or deleting columns in a table.
+- **Use Case**: Adding a new column `birthdate` to the `users` table.
+- **Example**: `ALTER TABLE users ADD birthdate DATE;`
+
+### DROP
+- **Purpose**: Deletes tables, views, or other database objects.
+- **Use Case**: Removing the `users` table from the database.
+- **Example**: `DROP TABLE users;`
+
+### TRUNCATE
+- **Purpose**: Removes all records from a table, including all spaces allocated for the records, but does not delete the table itself.
+- **Use Case**: Deleting all records from the `users` table while keeping the table structure.
+- **Example**: `TRUNCATE TABLE users;`
+
+## Data Control Language (DCL)
+
+DCL includes commands that control access to data in the database. It's used to manage permissions through roles and rights within the database environment.
+
+### GRANT
+- **Purpose**: Gives users access privileges to the database.
+- **Use Case**: Granting a user read-only access to the `users` table.
+- **Example**: `GRANT SELECT ON users TO 'read_only_user';`
+
+### REVOKE
+- **Purpose**: Removes access privileges from a user.
+- **Use Case**: Revoking all access from a user to the `users` table.
+- **Example**: `REVOKE ALL PRIVILEGES ON users FROM 'former_employee';`
+
+## Conclusion
+
+Understanding DML, DDL, and DCL is crucial for anyone working with SQL databases, as they cover the spectrum of operations from manipulating data to defining the structure of database objects and controlling access to data. This guide provides a clear overview of these key SQL language components, offering a solid foundation for refreshing knowledge or learning about SQL command categories.
+
+---
+The guide above omits Transaction Control Language (TCL), a crucial aspect of SQL that manages transaction control within the database. It plays a vital role in ensuring data integrity and consistency by managing transaction blocks. The following section expands the reference guide to include TCL and give a complete overview.
+
+## Transaction Control Language (TCL)
+
+TCL manages the changes made by DML statements.
It allows users to control transactions in a database, ensuring that the database remains consistent even in cases of system failure or concurrent access scenarios. TCL commands help in providing a mechanism to either commit a transaction, making all its changes permanent, or rollback a transaction, undoing all changes made since the last commit. + +### COMMIT +- **Purpose**: Makes all changes made during the current transaction permanent and visible to other users. +- **Use Case**: After successfully inserting several records into a table as part of a transaction. +- **Example**: `COMMIT;` + +### ROLLBACK +- **Purpose**: Undoes all changes made in the current transaction, reverting the database state back to what it was before the transaction began. +- **Use Case**: Reverting changes due to an error encountered during a transaction. +- **Example**: `ROLLBACK;` + +### SAVEPOINT +- **Purpose**: Sets a savepoint within a transaction, which you can rollback to without aborting the entire transaction. It's like a checkpoint within a larger transaction. +- **Use Case**: Creating a logical save point within a transaction for a complex operation that may need partial undoing. +- **Example**: `SAVEPOINT savepoint_name;` + +### ROLLBACK TO SAVEPOINT +- **Purpose**: Rolls the transaction back to a specified savepoint, undoing all changes made after the savepoint was set, without terminating the entire transaction. +- **Use Case**: Undoing changes after encountering an error in a transaction past a certain point but not wanting to undo all changes made during the transaction. +- **Example**: `ROLLBACK TO savepoint_name;` + +### SET TRANSACTION +- **Purpose**: Places a name on a transaction. Primarily used in systems that support transaction naming for identifying transactions in database logs. +- **Use Case**: Customizing the isolation level for a transaction or specifying a transaction as read-only or read-write. +- **Example**: `SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;` + +## Conclusion + +TCL is essential for managing the state of transactions within a database, providing tools to commit, rollback, and manage changes effectively. By including TCL alongside DML, DDL, and DCL, our SQL reference guide now offers a more comprehensive overview of SQL's capabilities for managing data, schema objects, access permissions, and transaction integrity within a relational database management system. This inclusion ensures a well-rounded understanding necessary for proficient database operation and management. + +--- + +Creating a more complete SQL reference guide involves encompassing a broad range of SQL syntax, functions, best practices, and advanced concepts. This guide is designed to serve as a comprehensive overview for users at various levels of expertise, offering both a refresher for experienced users and a solid foundation for newcomers. + +# Comprehensive SQL Reference Guide + +## Fundamentals of SQL + +### Data Manipulation Language (DML) +- **SELECT**: Retrieves data from a database. +- **INSERT**: Inserts new data into a database table. +- **UPDATE**: Modifies existing data in a table. +- **DELETE**: Removes data from a table. + +### Data Definition Language (DDL) +- **CREATE**: Creates new tables, views, or other database objects. +- **ALTER**: Modifies the structure of an existing database object. +- **DROP**: Deletes tables, views, or other database objects. +- **TRUNCATE**: Removes all records from a table, including all spaces allocated for the records. 
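One distinction in that list worth making concrete is `DELETE` (DML) versus `TRUNCATE` (DDL): both can empty a table, but they behave differently. A brief sketch, reusing the illustrative `users` table from the earlier reference guide:

```sql
-- DML: removes rows one at a time, honours a WHERE clause, and is typically
-- logged per row, so it can be rolled back inside an open transaction
DELETE FROM users WHERE isActive = 0;

-- DDL: deallocates the table's storage in one operation and keeps only the
-- table definition; there is no WHERE clause, and in many systems it cannot
-- be rolled back the same way
TRUNCATE TABLE users;
```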
+ +### Data Control Language (DCL) +- **GRANT**: Gives user's access privileges to database. +- **REVOKE**: Removes access privileges from users. + +## Key SQL Statements and Clauses + +### SELECT Statement +- Basic syntax: `SELECT column1, column2 FROM table_name WHERE condition GROUP BY column ORDER BY column ASC|DESC;` + +### JOIN Clauses +- Types: `INNER JOIN`, `LEFT JOIN` (or `LEFT OUTER JOIN`), `RIGHT JOIN` (or `RIGHT OUTER JOIN`), `FULL JOIN` (or `FULL OUTER JOIN`). +- Used to combine rows from two or more tables, based on a related column between them. + +### Subqueries +- A query nested inside another query, used for complex queries. +- Can be used in `SELECT`, `FROM`, and `WHERE` clauses. + +## Advanced SQL Concepts + +### Indexes +- Used to speed up the retrieval of rows from a table. +- Important for improving query performance, especially for large datasets. + +### Transactions +- A set of SQL operations executed as a single unit of work. +- Must be Atomic, Consistent, Isolated, and Durable (ACID). + +### Views +- A virtual table based on the result-set of an SQL statement. +- Simplifies complex queries, enhances security, and abstracts underlying table structures. + +### Stored Procedures and Functions +- **Stored Procedures**: SQL code saved and executed as needed. +- **Functions**: Similar to stored procedures but can return a value. + +## SQL Functions + +### String Functions +- Examples: `CONCAT`, `LENGTH`, `SUBSTRING`, `UPPER`, `LOWER`. + +### Numeric Functions +- Examples: `ABS`, `CEIL`, `FLOOR`, `RAND`, `ROUND`. + +### Date and Time Functions +- Examples: `CURRENT_DATE`, `DATE_ADD`, `DATE_DIFF`, `YEAR`, `MONTH`, `DAY`. + +### Aggregate Functions +- Examples: `COUNT`, `SUM`, `AVG`, `MIN`, `MAX`. +- Often used with the `GROUP BY` clause. + +## Best Practices and Performance Optimization + +### Schema Design +- Normalize data to eliminate redundancy and ensure data integrity. +- Use appropriate data types for accuracy and efficiency. + +### Query Optimization +- Use indexes wisely to improve query performance. +- Avoid using `SELECT *`; specify only the needed columns. +- Write efficient JOINs and prefer WHERE clauses for filtering. + +### Security Practices +- Avoid SQL injection by using parameterized queries. +- Implement proper access controls using `GRANT` and `REVOKE`. + +## Conclusion + +This comprehensive SQL reference guide covers the essentials of SQL, from basic queries and DDL operations to more complex concepts like transactions, indexing, and performance optimization. Whether you're a beginner looking to understand the basics or an experienced practitioner seeking to refresh your knowledge on advanced topics, this guide provides a structured overview of SQL's capabilities and best practices. + +--- + +Preparing for SQL interviews requires a solid understanding of advanced SQL concepts, queries, and optimizations. This guide is designed to provide a concise overview of typical advanced SQL interview questions, offering quick refreshers on key topics. + +## Advanced SQL Interview Questions Guide + +### 1. **Window Functions** +- **Question**: Explain window functions in SQL. Provide examples where they are useful. +- **Refresher**: Window functions perform a calculation across a set of table rows related to the current row. Unlike GROUP BY, window functions do not cause rows to become grouped into a single output row. 
Common examples include `ROW_NUMBER()`, `RANK()`, `DENSE_RANK()`, and `NTILE()`, useful for tasks like ranking, partitioning, and cumulative aggregates. + +### 2. **Common Table Expressions (CTEs)** +- **Question**: What are Common Table Expressions and when would you use them? +- **Refresher**: CTEs allow you to name a temporary result set that you can reference within a SELECT, INSERT, UPDATE, or DELETE statement. They are useful for creating readable and maintainable queries by breaking down complex queries into simpler parts, especially when dealing with hierarchical or recursive data. + +### 3. **Indexes and Performance** +- **Question**: How do indexes work, and what are the trade-offs of using them? +- **Refresher**: Indexes improve the speed of data retrieval operations by providing quick access to rows in a database table. The trade-off is that they increase the time required for write operations (INSERT, UPDATE, DELETE) because the index must be updated. They also consume additional storage space. + +### 4. **Query Optimization** +- **Question**: Describe how you would optimize a slow-running query. +- **Refresher**: Optimization strategies include: + - Ensuring proper use of indexes. + - Avoiding SELECT * and being specific about the columns needed. + - Using JOINs instead of subqueries where appropriate. + - Analyzing and optimizing the query execution plan. + +### 5. **Transactions** +- **Question**: What is a database transaction, and what properties must it have (ACID)? +- **Refresher**: A transaction is a sequence of database operations that are treated as a single logical unit of work. It must be Atomic (all or nothing), Consistent (ensures data integrity), Isolated (independent from other transactions), and Durable (persists after completion). + +### 6. **Database Locking** +- **Question**: What is database locking? Explain optimistic vs. pessimistic locking. +- **Refresher**: Database locking is a mechanism to control concurrent access to a database to prevent data inconsistencies. Pessimistic locking locks resources as they are accessed, suitable for high-conflict scenarios. Optimistic locking allows concurrent access and checks at commit time if another transaction has modified the data, suitable for low-conflict environments. + +### 7. **Normalization vs. Denormalization** +- **Question**: Compare normalization and denormalization. When would you use each? +- **Refresher**: Normalization involves organizing data to reduce redundancy and improve data integrity. Denormalization adds redundancy to optimize read operations. Use normalization to design efficient schemas and maintain data integrity, and denormalization to optimize query performance in read-heavy applications. + +### 8. **SQL Injection** +- **Question**: What is SQL injection, and how can it be prevented? +- **Refresher**: SQL injection is a security vulnerability that allows an attacker to interfere with the queries that an application makes to its database. It can be prevented by using prepared statements and parameterized queries, escaping all user-supplied input, and practicing least privilege access control for database operations. + +### 9. **Data Types** +- **Question**: Discuss the importance of choosing appropriate data types in a database schema. +- **Refresher**: Appropriate data types ensure accurate data representation and efficient storage. They can affect performance, especially for indexing and joins, and influence the integrity of the data (e.g., using DATE types to ensure valid dates). 
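Questions 1 and 2 often come up together in interviews, so here is a small self-contained sketch (illustrative `sales` table and data; type names may need minor tweaks per RDBMS) that stages rows in a CTE and ranks them per region with a window function, keeping every row in the output rather than collapsing groups the way `GROUP BY` would:

```sql
CREATE TABLE sales (
    region TEXT,
    rep    TEXT,
    amount DECIMAL(10, 2)
);

INSERT INTO sales (region, rep, amount) VALUES
    ('North', 'Alice', 1200.00),
    ('North', 'Bob',    900.00),
    ('South', 'Carol', 1500.00),
    ('South', 'Dave',  1500.00);

-- The CTE names the staged rows; RANK() numbers reps within each region by
-- amount, and ties (Carol and Dave) share rank 1
WITH ranked AS (
    SELECT
        region,
        rep,
        amount,
        RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS region_rank
    FROM sales
)
SELECT region, rep, amount, region_rank
FROM ranked
WHERE region_rank = 1;
```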
+ +### 10. **Subqueries vs. JOINs** +- **Question**: Compare subqueries with JOINs. When is each appropriate? +- **Refresher**: Subqueries can simplify complex joins and are useful when you need to select rows before joining. JOINs are generally faster and more efficient for straightforward joins of tables. The choice depends on the specific use case, readability, and performance. + +This advanced guide covers key topics and concepts that are often discussed in SQL interviews, offering a quick way to refresh your knowledge and prepare for challenging questions. + +--- + +Creating a guide that encapsulates the lifecycle of a SQL query—from its inception to its use in production—offers a comprehensive look at the process of working with SQL in real-world scenarios. This narrative will explore how queries are built, optimized, tested, and refined, as well as considerations for maintaining and updating queries over time. + +# The Lifecycle of a SQL Query: A Comprehensive Guide + +## Conceptualization and Design + +### 1. **Requirement Gathering** +- Understand the data retrieval or manipulation need. This could stem from application requirements, reporting needs, or data analysis tasks. + +### 2. **Schema Understanding** +- Familiarize yourself with the database schema, including table structures, relationships, indexes, and constraints. Tools like ER diagrams can be invaluable here. + +### 3. **Query Drafting** +- Begin drafting your SQL query, focusing on selecting the needed columns, specifying the correct tables, and outlining the initial conditions (WHERE clauses). + +## Development and Optimization + +### 4. **Environment Setup** +- Ensure you have a development environment that mirrors production closely to test your queries effectively. + +### 5. **Performance Considerations** +- As you build out your query, keep an eye on potential performance impacts. Consider the size of your data and how your query might scale. + +### 6. **Query Refinement** +- Use EXPLAIN plans (or equivalent) to understand how your database executes the query. Look for full table scans, inefficient joins, and opportunities to use indexes. + +### 7. **Iteration and Testing** +- Test your query extensively. This includes not only checking for correctness but also performance under different data volumes. + +## Review and Deployment + +### 8. **Code Review** +- Have your query reviewed by peers. Fresh eyes can spot potential issues or optimizations you might have missed. + +### 9. **Version Control** +- Use version control for your SQL queries, especially if they are part of application code or critical reports. + +### 10. **Deployment to Production** +- Follow your organization's deployment practices to move your query to production. This might involve migration scripts for schema changes or updates to application code. + +## Monitoring and Maintenance + +### 11. **Performance Monitoring** +- Keep an eye on how your query performs in the production environment. Use database monitoring tools to track execution times and resource usage. + +### 12. **Iterative Optimization** +- As data grows or usage patterns change, you might need to revisit and optimize your query. This could involve adding indexes, adjusting joins, or even redesigning part of your schema. + +### 13. **Documentation and Knowledge Sharing** +- Document your query, including its purpose, any assumptions made during its design, and important performance considerations. Share your findings and insights with your team. 
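Much of step 13 can live right next to the query. A sketch of the kind of header comment that keeps purpose, assumptions, and performance notes with the SQL itself (the query and all names are illustrative):

```sql
-- Purpose:     Monthly active users per region for the operations dashboard.
-- Assumptions: logins.login_at is stored in UTC; a user counts once per month
--              regardless of how many times they log in.
-- Performance: relies on the index on logins (login_at, user_id); re-check the
--              execution plan if the reporting window grows.
SELECT
    u.region,
    COUNT(DISTINCT l.user_id) AS monthly_active_users
FROM logins AS l
INNER JOIN users AS u
    ON u.id = l.user_id
WHERE l.login_at >= '2024-01-01'
  AND l.login_at <  '2024-02-01'
GROUP BY u.region;
```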
+ +## Modification and Evolution + +### 14. **Adapting to Changes** +- Business requirements evolve, and so will your queries. Be prepared to modify your queries in response to new needs or changes in the underlying data model. + +### 15. **Refactoring and Cleanup** +- Over time, some queries may become redundant, or better ways of achieving the same results may emerge. Regularly review and refactor your SQL queries to keep your codebase clean and efficient. + +## Best Practices Throughout the Lifecycle + +- **Comment Your SQL**: Ensure your queries are well-commented to explain the "why" behind complex logic. +- **Prioritize Readability**: Write your SQL in a way that is easy for others (and future you) to understand. +- **Stay Informed**: Keep up with the latest features and optimizations available in your specific SQL dialect. + +## Conclusion + +The lifecycle of a SQL query is an iterative and evolving process. From initial drafting to deployment and ongoing optimization, each step involves critical thinking, testing, and collaboration. By following best practices and maintaining a focus on performance and readability, you can ensure that your SQL queries remain efficient, understandable, and aligned with business needs over time. +--- + +To enhance your SQL Style and Best Practices Guide, integrating the detailed insights on key SQL keywords with your established guidelines will create a comprehensive reference. This unified guide will not only cover stylistic and structural best practices but also delve into the strategic use of SQL keywords for data manipulation and query optimization. Here's how you can structure this expanded guide: + +# Unified SQL Style and Best Practices Guide + +This guide combines SQL coding best practices with a focus on the strategic use of key SQL keywords. It's designed for intermediate to advanced users aiming for efficiency, readability, maintainability, and performance in their SQL queries. + +## Formatting and Style +- **Case Usage**: Use uppercase for SQL keywords and lowercase for identifiers. +- **Indentation and Alignment**: Enhance readability with consistent indentation and alignment. +- **Comma Placement**: Choose and consistently use leading or trailing commas for column lists. +- **Whitespace**: Use generously to separate elements of your query. + +## Query Structure +- **Selecting Columns**: Prefer specifying columns over `SELECT *`. +- **Using Aliases**: Simplify notation and improve readability with aliases. +- **Joins**: Use explicit JOINs and meaningful ON conditions. +- **Where Clauses**: Use WHERE clauses for efficient row filtering. + +## Key SQL Keywords and Their Use Cases +- **SELECT**: Specify columns to return. +- **DISTINCT**: Remove duplicate rows. +- **TOP / LIMIT / FETCH FIRST**: Limit the number of rows returned. +- **WHERE**: Filter rows based on conditions. +- **ORDER BY**: Sort query results. +- **GROUP BY**: Group rows for aggregate calculations. +- **HAVING**: Filter groups based on aggregate results. +- **JOIN**: Combine rows from multiple tables. + +## Best Practices and Performance +- **Index Usage**: Leverage indexes for faster queries. +- **Query Optimization**: Use subqueries, CTEs, and EXISTS clauses judiciously. +- **Avoiding Common Pitfalls**: Be cautious with NULL values and function use in WHERE clauses. +- **Consistency**: Maintain it across naming, formatting, and structure. +- **Commenting and Documentation**: Use comments to explain complex logic and assumptions. 
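Put together, a query written to the conventions above might look like the following sketch: uppercase keywords, lowercase identifiers, trailing commas, explicit join with a meaningful ON condition, specific columns, and an early WHERE filter (table and column names are illustrative, and the row-limit keyword depends on the dialect):

```sql
SELECT
    c.customer_name,
    SUM(o.total_amount) AS lifetime_value
FROM customers AS c
INNER JOIN orders AS o
    ON o.customer_id = c.customer_id
WHERE o.status = 'completed'
GROUP BY c.customer_name
HAVING SUM(o.total_amount) > 1000
ORDER BY lifetime_value DESC
LIMIT 10;  -- TOP or FETCH FIRST in other dialects
```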
+ +## Advanced Techniques and Considerations +- **Subqueries and Common Table Expressions (CTEs)**: Utilize for complex data manipulation and to improve query clarity. +- **Performance Tuning**: Regularly review and optimize queries based on execution plans and database feedback. +- **Database-Specific Syntax**: Be aware of and utilize database-specific features and syntax for optimization and functionality. + +## Conclusion +A thorough understanding of SQL best practices, coupled with strategic use of key SQL keywords, is crucial for writing efficient, effective, and maintainable queries. This guide provides a solid foundation, but always be prepared to adapt and evolve your practices to meet the specific needs of your projects and the dynamics of your team. + +By integrating insights on key SQL keywords with structural and stylistic best practices, this guide aims to be a comprehensive reference for crafting sophisticated and efficient SQL queries. + +--- + +For a comprehensive "Page Two" of your SQL Style and Best Practices Guide, incorporating advanced concepts, security practices, and additional performance optimization techniques would create a holistic reference. This section aims to cover aspects beyond basic syntax and common keywords, delving into areas that are crucial for developing robust, secure, and highly performant SQL applications. + +# Advanced SQL Concepts and Security Practices + +## Advanced Data Manipulation + +### 1. **Window Functions** +- Provide powerful ways to perform complex calculations across sets of rows related to the current row, such as running totals, rankings, and moving averages. +- Example: `SELECT ROW_NUMBER() OVER (ORDER BY column_name) FROM table_name;` + +### 2. **Common Table Expressions (CTEs)** +- Enable the creation of temporary result sets that can be referenced within a SELECT, INSERT, UPDATE, or DELETE statement. +- Facilitate more readable and modular queries, especially useful for recursive queries. +- Example: `WITH cte_name AS (SELECT column_name FROM table_name) SELECT * FROM cte_name;` + +## Query Performance Optimization + +### 3. **Execution Plan Analysis** +- Understanding and analyzing SQL execution plans to identify performance bottlenecks. +- Tools and commands vary by database system but are essential for tuning queries. + +### 4. **Index Management** +- Beyond basic index usage, understanding index types (e.g., B-tree, hash, GIN, GiST in PostgreSQL) and their appropriate use cases. +- The impact of indexing on write operations and strategies for index maintenance. + +## Security Practices + +### 5. **SQL Injection Prevention** +- Use parameterized queries or prepared statements to handle user input. +- Example: Avoiding direct string concatenation in queries and using binding parameters. + +### 6. **Principle of Least Privilege** +- Ensure database users and applications have only the necessary permissions to perform their functions. +- Regularly review and audit permissions. + +### 7. **Data Encryption** +- Use encryption at rest and in transit to protect sensitive data. +- Understand and implement database and application-level encryption features. + +## Additional Considerations + +### 8. **Database-Specific Features and Extensions** +- Be aware of and leverage database-specific syntax, functions, and extensions for advanced use cases (e.g., JSON handling, geospatial data). + +### 9. **Testing and Version Control** +- Implement testing strategies for SQL queries and database schemas. 
+- Use version control systems to manage changes to database schemas and SQL scripts. + +### 10. **Continuous Integration/Continuous Deployment (CI/CD) for Databases** +- Apply CI/CD practices to database schema changes and migrations to ensure smooth deployment processes and maintain database integrity across environments. + +## Conclusion + +This extended guide emphasizes the importance of advanced SQL techniques, performance optimization, security practices, and the adaptability of SQL strategies to specific database systems and applications. It's designed to be a living document, encouraging continuous learning and adaptation to new technologies, methodologies, and best practices in the evolving landscape of SQL database management and development. + + +--- + +Creating a guide for JSON handling in SQL requires an understanding of how modern relational database management systems (RDBMS) incorporate JSON data types and functions. This guide focuses on providing you with the tools and knowledge to effectively store, query, and manipulate JSON data within an SQL environment. The specific examples and functions can vary between databases like PostgreSQL, MySQL, SQL Server, and others, so we'll cover some general concepts and then delve into specifics for a few popular systems. + +# JSON Handling in SQL Guide + +## Introduction to JSON in SQL + +JSON (JavaScript Object Notation) is a lightweight data interchange format. Many modern RDBMS support JSON data types, allowing you to store JSON documents directly in database tables and use SQL functions to interact with these documents. + +## General Concepts + +### 1. **Storing JSON Data** +- JSON data can typically be stored in columns specifically designed to hold JSON data types (`JSON` or `JSONB` in PostgreSQL, `JSON` in MySQL, and `JSON` in SQL Server). + +### 2. **Querying JSON Data** +- Most RDBMS that support JSON provide functions and operators to extract elements from JSON documents, allowing you to query inside a JSON column as if it were relational data. + +### 3. **Indexing JSON Data** +- Some databases allow indexing JSON data, which can significantly improve query performance on JSON columns. + +## Database-Specific Guides + +### PostgreSQL + +- **Data Types**: `JSON` and `JSONB`, with `JSONB` being a binary format that supports indexing. +- **Querying**: Use operators like `->`, `->>`, `@>`, and `#>>` to access and manipulate JSON data. +- **Indexing**: GIN (Generalized Inverted Index) indexes can be used on `JSONB` columns to improve query performance. + +### MySQL + +- **Data Types**: `JSON`, a binary format that allows efficient access to data elements. +- **Querying**: Use functions like `JSON_EXTRACT()`, `JSON_SEARCH()`, and `JSON_VALUE()` to access elements within a JSON document. +- **Indexing**: Virtual columns can be created to index JSON attributes indirectly. + +### SQL Server + +- **Data Types**: `JSON` data is stored in columns of type `nvarchar(max)`. +- **Querying**: Use the `JSON_VALUE()`, `JSON_QUERY()`, and `OPENJSON()` functions to extract data from JSON text. +- **Indexing**: Create indexes on computed columns that extract scalar values from JSON text. + +## Best Practices + +### Storing vs. Relational Data +- Decide between storing data as JSON or normalizing it into relational tables based on use cases, query performance, and application requirements. 
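To ground the PostgreSQL entries above, here is a hedged sketch of storing, querying, and indexing a `JSONB` column with the operators listed in the database-specific guide (table, column, and key names are illustrative):

```sql
-- PostgreSQL: JSONB column with a GIN index for containment queries
CREATE TABLE events (
    id      BIGSERIAL PRIMARY KEY,
    payload JSONB NOT NULL
);

CREATE INDEX idx_events_payload ON events USING GIN (payload);

INSERT INTO events (payload)
VALUES ('{"type": "login", "user": {"id": 42, "name": "John"}}');

-- ->> returns text, -> returns JSON; chain them to reach nested keys
SELECT payload ->> 'type'            AS event_type,
       payload -> 'user' ->> 'name'  AS user_name
FROM events
WHERE payload @> '{"type": "login"}';  -- containment test, served by the GIN index
```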
+ +### Performance Considerations +- Use JSON data types judiciously, as querying and manipulating JSON data can be more resource-intensive than using traditional relational data. + +### Security +- Validate JSON data to avoid injection attacks and ensure data integrity. + +### Use of Functions and Operators +- Familiarize yourself with the JSON functions and operators provided by your RDBMS to efficiently query and manipulate JSON data. + +## Conclusion + +Handling JSON in SQL offers flexibility in storing and querying semi-structured data, bridging the gap between NoSQL and relational database features. By understanding the capabilities and limitations of JSON within your specific SQL database system, you can leverage the full power of SQL for data manipulation while accommodating complex data structures common in modern web applications. This guide serves as a starting point for effectively working with JSON data in SQL, encouraging further exploration of database-specific features and best practices. + +--- + +Creating a guide for handling JSON in SQLite3 requires an understanding of SQLite's unique approach to JSON data. Unlike some other RDBMS that have specific JSON data types, SQLite uses text data type to store JSON strings and provides a set of JSON functions for manipulating JSON data. This guide will introduce you to storing, querying, and manipulating JSON data within SQLite3, leveraging its JSON1 extension. + +# SQLite3 JSON Handling Guide + +## Introduction + +SQLite3, a lightweight disk-based database, supports JSON content through its JSON1 extension module. This allows for efficient storage and manipulation of JSON data within a relational database framework. + +## Enabling JSON1 Extension + +Ensure the JSON1 extension is enabled in your SQLite3 setup. In most distributions, JSON1 comes precompiled and ready to use. + +## Storing JSON Data + +In SQLite3, JSON data is stored in `TEXT` columns formatted as valid JSON strings. While there's no specific JSON data type, ensuring the text is a valid JSON string is crucial for utilizing the JSON functions effectively. + +```sql +CREATE TABLE example ( + id INTEGER PRIMARY KEY, + data TEXT +); +``` + +Ensure to insert valid JSON into the `data` column: + +```sql +INSERT INTO example (data) VALUES ('{"name": "John", "age": 30, "city": "New York"}'); +``` + +## Querying JSON Data + +SQLite3 offers a variety of functions to work with JSON data, such as `json_extract`, `json_object`, and `json_array`. + +### Extracting Data from JSON + +To get specific information from a JSON column, use `json_extract`: + +```sql +SELECT json_extract(data, '$.name') AS name FROM example; +``` + +This will return the value associated with the key `name` in the JSON document. + +### Modifying JSON Data + +SQLite3 allows you to modify JSON data using functions like `json_set`, `json_insert`, and `json_replace`. + +- **`json_set`**: Updates the value of an element if it exists or adds it if it doesn’t. + +```sql +UPDATE example +SET data = json_set(data, '$.age', 31) +WHERE json_extract(data, '$.name') = 'John'; +``` + +This updates John's age to 31. + +### Creating JSON Objects + +The `json_object` function lets you create JSON objects. This can be useful for aggregating query results into JSON format: + +```sql +SELECT json_object('name', name, 'age', age) FROM ( + SELECT 'John' AS name, 30 AS age +); +``` + +This returns a JSON object with name and age keys. 
+ +### Aggregating JSON Data + +For aggregating multiple rows into a JSON array, use the `json_group_array` function: + +```sql +SELECT json_group_array(json_object('name', name, 'age', age)) +FROM (SELECT 'John' AS name, 30 AS age UNION SELECT 'Jane', 25); +``` + +This aggregates the results into a JSON array of objects. + +## Indexing JSON Data + +While SQLite3 does not directly index JSON data, you can create indexed expressions or virtual columns in a table that store extracted JSON values. This can significantly speed up queries: + +```sql +CREATE INDEX idx_name ON example (json_extract(data, '$.name')); +``` + +## Best Practices + +- **Valid JSON**: Ensure that the data inserted into JSON columns is valid JSON. +- **Schema Design**: Consider whether to store data as JSON or normalize it into relational tables based on your query needs and performance considerations. +- **Indexing Strategy**: Use indexing wisely to improve the performance of queries that access JSON data frequently. +- **Performance Considerations**: Complex JSON queries might be slower than equivalent queries on normalized data. Profile and optimize queries as needed. + +## Conclusion + +SQLite3's JSON1 extension provides robust support for JSON data, offering flexibility in how data is stored, queried, and manipulated. By understanding and utilizing the JSON functions available in SQLite3, you can efficiently integrate JSON data into your SQLite3-based applications, benefiting from both the flexibility of JSON and the reliability of SQLite3. + +--- + +Creating a guide focused on crafting SQL queries with an emphasis on best practices involves outlining principles that enhance readability, maintainability, and performance. This guide is designed to help developers at all levels write clear, efficient, and reliable SQL code. + +# Crafting SQL Queries: A Best Practice Guide + +## Planning and Design + +### 1. **Understand Your Data Model** +- Familiarize yourself with the database schema, relationships between tables, and data types. +- Use entity-relationship diagrams (ERD) or schema visualization tools to aid understanding. + +### 2. **Define Your Requirements** +- Clearly understand what data you need to retrieve, update, or manipulate. +- Consider the implications of your query on the database's performance and integrity. + +## Writing Queries + +### 3. **Selecting Data** +- **Be Specific**: Instead of using `SELECT *`, specify the column names to retrieve only the data you need. +- **Use Aliases**: When using tables or columns with long names, use aliases to improve readability. + +### 4. **Filtering Data** +- **Explicit Conditions**: Use clear and explicit conditions in `WHERE` clauses. Avoid overly complex conditions; consider breaking them down for clarity. +- **Parameterize Queries**: To prevent SQL injection and improve cacheability, use parameterized queries with placeholders for inputs. + +### 5. **Joining Tables** +- **Specify Join Type**: Always specify the type of join (e.g., `INNER JOIN`, `LEFT JOIN`) to make your intent clear. +- **Use Conditions**: Ensure that your join conditions are accurate to avoid unintentional Cartesian products. + +### 6. **Grouping and Aggregating** +- **Clear Aggregation**: When using `GROUP BY`, ensure that all selected columns are either aggregated or explicitly listed in the `GROUP BY` clause. +- **Having Clause**: Use the `HAVING` clause to filter groups after aggregation, not before. + +## Performance Optimization + +### 7. 
**Indexes** +- Understand which columns are indexed and craft your queries to leverage these indexes, especially in `WHERE` clauses and join conditions. +- Avoid operations on columns that negate the use of indexes, like functions or type conversions. + +### 8. **Avoiding Subqueries** +- When possible, use joins instead of subqueries as they are often more performant, especially for large datasets. +- Evaluate if common table expressions (CTEs) or temporary tables could offer better performance or readability. + +### 9. **Limiting Results** +- Use `LIMIT` (or `TOP`, depending on your SQL dialect) to restrict the number of rows returned, especially when testing queries on large datasets. + +## Code Quality and Maintainability + +### 10. **Formatting** +- Use consistent formatting for keywords, indentations, and alignment to improve readability. +- Consider using a SQL formatter tool or follow a style guide adopted by your team. + +### 11. **Commenting** +- Comment your SQL queries to explain "why" something is done, especially for complex logic. +- Avoid stating "what" is done, as the SQL syntax should be clear enough for that purpose. + +### 12. **Version Control** +- Keep your SQL scripts in version control systems alongside your application code to track changes and collaborate effectively. + +## Testing and Review + +### 13. **Test Your Queries** +- Test your queries for correctness and performance on a dataset similar in size and structure to your production dataset. +- Use explain plans to understand how your query is executed. + +### 14. **Peer Review** +- Have your queries reviewed by peers for feedback on efficiency, readability, and adherence to best practices. + +## Conclusion + +Crafting efficient SQL queries is a skill that combines technical knowledge with thoughtful consideration of how each query impacts the database and the application. By adhering to these best practices, developers can ensure their SQL code is not only functional but also efficient, maintainable, and secure. Continuous learning and staying updated with the latest SQL features and optimization techniques are crucial for writing high-quality SQL queries. + +--- + +Creating a syntax guide for SQL queries emphasizes the structure and format of SQL commands, highlighting best practices for clarity and efficiency. This guide will serve as a reference for constructing SQL queries, covering the basic to intermediate syntax for common SQL operations, including selection, insertion, updating, deletion, and complex querying with joins and subqueries. + +# SQL Query Syntax Guide + +## Basic SQL Query Structure + +### SELECT Statement +Retrieve data from one or more tables. +```sql +SELECT column1, column2, ... +FROM tableName +WHERE condition +ORDER BY column1 ASC|DESC; +``` + +### INSERT Statement +Insert new data into a table. +```sql +INSERT INTO tableName (column1, column2, ...) +VALUES (value1, value2, ...); +``` + +### UPDATE Statement +Update existing data in a table. +```sql +UPDATE tableName +SET column1 = value1, column2 = value2, ... +WHERE condition; +``` + +### DELETE Statement +Delete data from a table. +```sql +DELETE FROM tableName +WHERE condition; +``` + +## Joins + +Combine rows from two or more tables based on a related column. + +### INNER JOIN +Select records with matching values in both tables. 
+```sql +SELECT columns +FROM table1 +INNER JOIN table2 +ON table1.commonColumn = table2.commonColumn; +``` + +### LEFT JOIN (LEFT OUTER JOIN) +Select all records from the left table, and matched records from the right table. +```sql +SELECT columns +FROM table1 +LEFT JOIN table2 +ON table1.commonColumn = table2.commonColumn; +``` + +### RIGHT JOIN (RIGHT OUTER JOIN) +Select all records from the right table, and matched records from the left table. +```sql +SELECT columns +FROM table1 +RIGHT JOIN table2 +ON table1.commonColumn = table2.commonColumn; +``` + +### FULL JOIN (FULL OUTER JOIN) +Select all records when there is a match in either left or right table. +```sql +SELECT columns +FROM table1 +FULL OUTER JOIN table2 +ON table1.commonColumn = table2.commonColumn; +``` + +## Subqueries + +A subquery is a query within another SQL query and embedded within the WHERE clause. +```sql +SELECT column1, column2, ... +FROM tableName +WHERE column1 IN (SELECT column FROM anotherTable WHERE condition); +``` + +## Aggregate Functions + +Used to compute a single result from a set of input values. + +### COUNT +```sql +SELECT COUNT(columnName) +FROM tableName +WHERE condition; +``` + +### MAX +```sql +SELECT MAX(columnName) +FROM tableName +WHERE condition; +``` + +### MIN +```sql +SELECT MIN(columnName) +FROM tableName +WHERE condition; +``` + +### AVG +```sql +SELECT AVG(columnName) +FROM tableName +WHERE condition; +``` + +### SUM +```sql +SELECT SUM(columnName) +FROM tableName +WHERE condition; +``` + +## Grouping Data + +Group rows that have the same values in specified columns into summary rows. + +### GROUP BY +```sql +SELECT column1, AGG_FUNC(column2) +FROM tableName +GROUP BY column1; +``` + +### HAVING +Used with GROUP BY to specify a condition for groups. +```sql +SELECT column1, AGG_FUNC(column2) +FROM tableName +GROUP BY column1 +HAVING AGG_FUNC(column2) > condition; +``` + +## Best Practices for SQL Syntax + +- **Consistency**: Maintain consistent casing for SQL keywords and indentations to enhance readability. +- **Qualify Columns**: Always qualify column names with table names or aliases when using multiple tables. +- **Use Aliases**: For tables and subqueries to make SQL statements more readable. +- **Parameterize Queries**: To prevent SQL injection and ensure queries are safely constructed, especially in applications. + +This syntax guide provides a foundational overview of writing SQL queries, from basic operations to more complex join conditions and subqueries. Adhering to best practices in structuring and formatting your SQL code will make it more readable, maintainable, and secure. + +--- + +For understanding and visualizing database schemas, including generating entity-relationship (ER) diagrams, several open-source tools are available that run on Linux. These tools can help you comprehend table structures, relationships, indexes, and constraints effectively. Here's a guide to some of the most commonly used open-source tools for this purpose: + +## 1. DBeaver + +- **Description**: DBeaver is a universal SQL client and a database administration tool that supports a wide variety of databases. It includes functionalities for database management, editing, and schema visualization, including ER diagrams. +- **Features**: + - Supports many databases (MySQL, PostgreSQL, SQLite, etc.) 
+ - ER diagrams generation + - Data editing and SQL query execution +- **Installation**: Available on Linux through direct download, or package managers like `apt` for Ubuntu, `dnf` for Fedora, or as a snap package. +- **Usage**: To generate ER diagrams, simply connect to your database, navigate to the database or schema, right-click, and select the option to view the diagram. + +## 2. pgModeler + +- **Description**: pgModeler is an open-source tool specifically designed for PostgreSQL. It allows you to model databases via a user-friendly interface and can automatically generate schemas based on your designs. +- **Features**: + - Detailed modeling capabilities + - Export models to SQL scripts + - Reverse engineering of existing databases to create diagrams +- **Installation**: Compiled binaries are available for Linux, or you can build from source. +- **Usage**: Start by creating a new model, then use the tool to add tables, relationships, etc. pgModeler can then generate the SQL code or reverse-engineer the model from an existing database. + +## 3. MySQL Workbench (for MySQL) + +- **Description**: While not exclusively Linux-based or covering all databases, MySQL Workbench is an essential tool for those working with MySQL databases. It provides database design, modeling, and comprehensive administration tools. +- **Features**: + - Visual SQL Development + - Database Migration + - ER diagram creation and management +- **Installation**: Available through the official MySQL website, with support for various Linux distributions. +- **Usage**: Connect to your MySQL database, and use the database modeling tools to create, manage, and visualize ER diagrams. + +## 4. SchemaCrawler + +- **Description**: SchemaCrawler is a command-line tool that allows you to visualize your database schema and generate ER diagrams in a platform-independent manner. It's not a GUI tool, but it's powerful for scripting and integrating into your workflows. +- **Features**: + - Database schema discovery and comprehension + - Ability to generate ER diagrams as HTML or graphical formats + - Works with any JDBC-compliant database +- **Installation**: Available as a downloadable JAR. Requires Java. +- **Usage**: Run SchemaCrawler with the appropriate command-line arguments to connect to your database and specify the output format for your schema visualization. + +## Installing and Using the Tools + +For each tool, you'll typically find installation instructions on the project's website or GitHub repository. In general, the process involves downloading the software package for your Linux distribution, extracting it if necessary, and following any provided installation instructions. + +When using these tools, the first step is always to establish a connection to your database. This usually requires you to input your database credentials and connection details. Once connected, you can explore the features related to schema visualization and ER diagram generation. + +## Conclusion + +Choosing the right tool depends on your specific database system and personal preference regarding GUI versus command-line interfaces. For comprehensive database management and visualization, DBeaver and MySQL Workbench offer extensive features. For PostgreSQL enthusiasts, pgModeler provides a specialized experience, whereas SchemaCrawler is ideal for those who prefer working within a command-line environment and require a tool that supports multiple database systems. 
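
As a concrete illustration of the command-line workflow, here is a hedged SchemaCrawler invocation against a SQLite database. Option names vary a little between SchemaCrawler releases, so treat this as a sketch and confirm the flags with `schemacrawler.sh --help` for your version; graphical output formats also assume Graphviz is installed.

```bash
# Sketch: generate a schema diagram for a SQLite database with SchemaCrawler
# (long-option names assume a recent SchemaCrawler release; verify with --help)
./schemacrawler.sh \
  --server=sqlite \
  --database=example.db \
  --info-level=standard \
  --command=schema \
  --output-format=png \
  --output-file=example-schema.png
```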
\ No newline at end of file diff --git a/tech_docs/music/SoX_guide.md b/tech_docs/music/SoX_guide.md new file mode 100644 index 0000000..f91d11e --- /dev/null +++ b/tech_docs/music/SoX_guide.md @@ -0,0 +1,93 @@ +Creating a complete user guide for SoX involves covering a range of basic use cases to help you get started with this versatile audio processing tool. SoX is highly effective for tasks like format conversion, audio effects application, and general sound manipulation, making it a go-to utility for both beginners and advanced users comfortable with the command line. + +### Installation + +First, ensure SoX is installed on your system. It's available in most Linux distributions' package repositories. + +For Debian-based systems (like Ubuntu), use: +```bash +sudo apt-get install sox +``` + +For Red Hat-based systems, use: +```bash +sudo yum install sox +``` + +### Basic Operations + +#### 1. Converting Audio Formats +SoX can convert audio files between various formats. For example, to convert an MP3 file to a WAV file: +```bash +sox input.mp3 output.wav +``` + +#### 2. Playing Audio Files +SoX can play audio files directly from the command line: +```bash +play filename.mp3 +``` + +#### 3. Recording Audio +To record audio with SoX, use the `rec` command. This example records a 5-second audio clip from the default recording device: +```bash +rec -d 5 myrecording.wav +``` + +### Applying Effects + +#### 1. Changing Volume +To increase or decrease the volume of an audio file, use the `vol` effect: +```bash +sox input.mp3 output.mp3 vol 2dB +``` + +#### 2. Applying Reverb +Add reverb to an audio file with: +```bash +sox input.wav output.wav reverb +``` + +#### 3. Trimming Audio +Trim an audio file to only include a specific portion (e.g., start at 10 seconds and end at 20 seconds): +```bash +sox input.mp3 output.mp3 trim 10 10 +``` + +#### 4. Combining Audio Files +Concatenate two or more audio files into one: +```bash +sox input1.mp3 input2.mp3 output.mp3 +``` + +### Advanced Features + +#### 1. Applying Multiple Effects +You can chain multiple effects in a single command: +```bash +sox input.mp3 output.mp3 reverb vol 2dB trim 0 30 +``` + +#### 2. Noise Reduction +To reduce noise, first capture a noise profile: +```bash +sox noise-audio.wav -n noiseprof noise.prof +``` +Then apply the noise reduction: +```bash +sox input.wav output.wav noisered noise.prof 0.3 +``` + +#### 3. Spectrogram +Generate a spectrogram of an audio file: +```bash +sox input.mp3 -n spectrogram -o output.png +``` + +### Tips and Tricks + +- **Chain Effects**: SoX allows for complex processing chains that combine multiple effects, optimizing the processing flow. +- **Scripting**: Integrate SoX commands into shell scripts for batch processing or automated audio manipulation tasks. +- **Documentation**: For more detailed information on all SoX capabilities and effects, consult the SoX man page or the official SoX documentation by running `man sox` or visiting [SoX - Sound eXchange](http://sox.sourceforge.net/). + +SoX is an exceptionally powerful tool for audio processing, offering a wide range of functionality from basic to advanced audio manipulation and analysis. Experimenting with its various options and effects can help you achieve precisely the audio outcomes you need. \ No newline at end of file diff --git a/tech_docs/music/SuperCollider.md b/tech_docs/music/SuperCollider.md new file mode 100644 index 0000000..400f095 --- /dev/null +++ b/tech_docs/music/SuperCollider.md @@ -0,0 +1,228 @@ +Great choice! 
SuperCollider is a powerful tool for music production and sound synthesis. Here's a framework you can follow to get started with creating projects in SuperCollider, focusing on beat making, melodies, and other music production functions: + +1. Learn the basics of SuperCollider: + - Familiarize yourself with the SuperCollider environment and its key components: the language (SCLang) and the server (scsynth). + - Understand the basic syntax and structure of SCLang, which is similar to Python in some ways. + - Explore the built-in UGens (Unit Generators) and their functionalities for audio synthesis and processing. + +2. Set up your SuperCollider environment: + - Install SuperCollider on your computer and ensure it runs properly. + - Choose an IDE or text editor for writing SuperCollider code (e.g., the built-in IDE, Atom, or Vim). + - Test your audio output and configure any necessary audio settings. + +3. Learn the fundamentals of sound synthesis: + - Study the different synthesis techniques available in SuperCollider, such as subtractive, additive, FM, and granular synthesis. + - Experiment with creating basic waveforms, envelopes, and filters to shape your sounds. + - Understand the concepts of oscillators, amplitudes, frequencies, and modulation. + +4. Dive into rhythm and beat making: + - Learn how to create rhythmic patterns using SuperCollider's timing and sequencing capabilities. + - Explore the Pbind and Pmono classes for creating patterns and sequences. + - Experiment with different drum synthesis techniques, such as using noise generators, envelopes, and filters to create kick drums, snares, hi-hats, and other percussive sounds. + +5. Explore melody and harmony: + - Learn how to create melodic patterns and sequences using SuperCollider's pitch and scale functions. + - Experiment with different waveforms, envelopes, and effects to create various instrument sounds, such as synths, pads, and leads. + - Understand the concepts of scales, chords, and musical intervals to create harmonically pleasing melodies. + +6. Incorporate effects and processing: + - Explore the wide range of audio effects available in SuperCollider, such as reverb, delay, distortion, and compression. + - Learn how to apply effects to individual sounds or entire mixtures using the SynthDef and Synth classes. + - Experiment with creating custom effects chains and modulating effect parameters in real-time. + +7. Structure and arrange your music: + - Learn how to organize your musical elements into a structured composition using SuperCollider's Patterns and Routines. + - Explore techniques for arranging and transitioning between different sections of your track, such as verse, chorus, and bridge. + - Utilize automation and parameter modulation to add variation and movement to your arrangements. + +8. Experiment, iterate, and refine: + - Practice creating different genres and styles of EDM using SuperCollider. + - Iterate on your patches and compositions, fine-tuning sounds, rhythms, and arrangements. + - Seek feedback from the SuperCollider community, share your creations, and learn from others' techniques and approaches. + +Remember to refer to the SuperCollider documentation, tutorials, and community resources as you progress through your projects. The SuperCollider website (https://supercollider.github.io/) provides extensive documentation, guides, and examples to help you along the way. 
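
As a quick sanity check of steps 2 and 3 above, the following minimal sketch boots the server and plays a quiet test tone; the 440 Hz frequency and 0.1 amplitude are arbitrary choices. Evaluate each line in turn from the IDE.

```supercollider
// Boot the audio server (step 2: test your audio output)
s.boot;

// Once the server is running, play a quiet stereo sine tone
{ SinOsc.ar(440, 0, 0.1) ! 2 }.play;

// Silence everything (equivalent to pressing Ctrl-. / Cmd-.)
CmdPeriod.run;
```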
+ +Start with simple projects and gradually increase complexity as you become more comfortable with SuperCollider's concepts and workflow. Don't hesitate to experiment, explore, and have fun while creating your music! + +--- + +Certainly! Let's dive into mastering sound synthesis basics, rhythm and beat production, and crafting melodies and harmonies in SuperCollider. + +**Mastering Sound Synthesis Basics:** + +1. Synthesis Techniques: + - Subtractive Synthesis: This technique starts with a harmonically rich waveform (e.g., sawtooth or square wave) and then filters out certain frequencies to shape the sound. It's often used for creating warm pads, lush strings, and smooth basslines. + Example: `{RLPF.ar(Saw.ar(440), LFNoise1.kr(1).range(200, 5000), 0.1)}.play` + + - FM Synthesis: Frequency Modulation synthesis involves modulating the frequency of one oscillator (carrier) with another oscillator (modulator). FM synthesis is known for creating complex, dynamic, and evolving timbres, such as metallic sounds, bells, and percussive hits. + Example: `{SinOsc.ar(440 + SinOsc.ar(1, 0, 100, 100), 0, 0.5)}.play` + + - Additive Synthesis: This technique combines multiple sine waves at different frequencies and amplitudes to create complex timbres. It's useful for creating rich, harmonically dense sounds like organs, brass, and unique textures. + Example: `{Mix.fill(5, {|i| SinOsc.ar(440 * (i + 1), 0, 1 / (i + 1))})}.play` + +2. Practical Exercise: + - Create a simple sine wave: + `{SinOsc.ar(440, 0, 0.5)}.play` + + - Create a noise burst: + `{WhiteNoise.ar(0.5) * EnvGen.kr(Env.perc(0.01, 0.1), doneAction: 2)}.play` + +**Rhythm and Beat Production:** + +1. Building a Basic Drum Pattern: + - Here's an example of creating a simple drum pattern using `Pbind` and `SynthDef`: + + ```supercollider + SynthDef(\kick, {|amp = 0.5, freq = 60| + var sig = SinOsc.ar(freq, 0, amp) * EnvGen.kr(Env.perc(0.01, 0.5), doneAction: 2); + Out.ar(0, sig ! 2); + }).add; + + SynthDef(\snare, {|amp = 0.5| + var sig = WhiteNoise.ar(amp) * EnvGen.kr(Env.perc(0.01, 0.2), doneAction: 2); + Out.ar(0, sig ! 2); + }).add; + + Pbind( + \instrument, \kick, + \dur, Pseq([1, 1, 1, 1], inf), + \amp, 0.6 + ).play; + + Pbind( + \instrument, \snare, + \dur, Pseq([Rest(1), 1, Rest(1), 1], inf), + \amp, 0.4 + ).play; + ``` + +2. Rhythmic Complexity and Timing: + - Use `Pbind` with `Pseq` and `Prand` to create dynamic and evolving rhythms: + ```supercollider + Pbind( + \instrument, \kick, + \dur, Pseq([1, 0.5, 0.5, Prand([1, 0.5], 1)], inf), + \amp, 0.6 + ).play; + ``` + +**Crafting Melodies and Harmonies:** + +1. Constructing Melodies: + - Use scale and pitch classes to create melodic patterns: + ```supercollider + var scale = Scale.major.degrees; + var melody = Pbind( + \instrument, \synth, + \freq, Pseq(scale.collect({|degree| degree + 60}), inf), + \dur, 0.25, + \amp, 0.4 + ).play; + ``` + +2. Harmony and Chords: + - Generate chords and progressions using chord degrees and intervals: + ```supercollider + var chords = [ + [0, 2, 4], // I chord + [2, 4, 6], // II chord + [4, 6, 8] // III chord + ]; + + var progression = Pbind( + \instrument, \synth, + \freq, Pseq(chords.collect({|chord| chord.collect({|degree| degree + 60})}), inf), + \dur, 2, + \amp, 0.4 + ).play; + ``` + +Remember to experiment, explore, and build upon these examples to create your own unique sounds and compositions in SuperCollider. Happy music-making! + +--- + +Certainly! 
Here's a guide to producing down tempo music in minor keys using SuperCollider, incorporating the previously discussed mathematical concepts and ratios: + +I. Harmony and Chord Progressions + A. Use the `Scale` class to generate minor scales and chords + 1. `Scale.minor` for natural minor + 2. `Scale.harmonicMinor` for harmonic minor + 3. `Scale.melodicMinor` for melodic minor + B. Utilize `Pseq` and `Prand` to create chord progressions + C. Experiment with `Pswitch` and `Pif` to incorporate chromatic mediants + +II. Rhythm and Tempo + A. Use `TempoClock` to set the tempo between 60-90 BPM + B. Utilize `Pbind` to create rhythmic patterns and polyrhythms + 1. `\dur` for note durations (e.g., `Pseq([1/3, 1/6], inf)` for triplets against eighth notes) + 2. `\stretch` for rhythmic variations (e.g., `Pseq([2/3, 1/3], inf)` for dotted eighth notes against quarter notes) + C. Apply swing using `Pswing` or by manipulating durations + +III. Sound Design and Frequencies + A. Use `SinOsc`, `Saw`, `Pulse`, and other UGens for basic waveforms + B. Apply `RLPF`, `RHPF`, and `BPF` filters to focus on specific frequency ranges + C. Create layered textures using `Splay`, `Mix`, and `Splay` + D. Utilize the golden ratio for amplitude envelopes and modulation depths + +IV. Arrangement and Structure + A. Use the Fibonacci sequence for section lengths and transitions with `Pn`, `Pfin`, and `Pdef` + B. Create tension and release by alternating between sections using `Pseq` and `Ppar` + C. Use the rule of thirds for placing key elements and transitions with `Quant` + +V. Mixing and Mastering + A. Apply `AmpComp` and `FreqShift` to balance frequencies based on equal loudness contours + B. Use `Pan2` and `PanAz` for panning, following the "rule of sixths" + C. Adjust dynamics using `Compander`, `Limiter`, and `Normalizer` + D. Utilize `Meter` and `Loudness` UGens to monitor and control the dynamic range + +VI. Example Code + ```supercollider + ( + // Minor scale and chord progression + ~scale = Scale.minor; + ~chords = ~scale.degrees.collect(_.chord); + ~progression = Pseq([0, 3, 4, 0], inf); + + // Rhythm and tempo + ~tempo = 72; + ~rhythmPattern = Pseq([2/3, 1/3], inf); + + // Sound design and frequencies + ~synthDef = SynthDef(\pad, { + |freq = 440, amp = 0.5, cutoff = 500, rq = 0.5| + var osc1 = Saw.ar(freq); + var osc2 = Pulse.ar(freq * (1 + MouseX.kr(-0.1, 0.1))); + var env = EnvGen.kr(Env.perc(0.01, 1.618), doneAction: 2); + var filter = RLPF.ar(osc1 + osc2, cutoff * env, rq); + Out.ar(0, Pan2.ar(filter * env * amp)); + }).add; + + // Arrangement and structure + ~sections = [ + Pn(Ppar([ + Pbind(\instrument, \pad, \freq, Pseq((~chords[0] + 60).midicps, 1), \dur, 4), + Pbind(\instrument, \pad, \freq, Pseq((~chords[3] + 48).midicps, 1), \dur, 4), + ]), 8), + Pn(Ppar([ + Pbind(\instrument, \pad, \freq, Pseq((~chords[4] + 60).midicps, 1), \dur, 4), + Pbind(\instrument, \pad, \freq, Pseq((~chords[0] + 48).midicps, 1), \dur, 4), + ]), 13), + ]; + + // Mixing and mastering + ~master = { + var sig = In.ar(0, 2); + sig = CompanderD.ar(sig, 0.5, 1, 0.3, 0.01, 0.1); + sig = Limiter.ar(sig, 0.9, 0.01); + sig = Splay.ar(sig); + sig = Loudness.ar(sig); + Out.ar(0, sig * 0.8); + }.play; + + // Play the sections + ~sections[0].play(TempoClock(~tempo / 60)); + ~sections[1].play(TempoClock(~tempo / 60), quant: [8]); + ) + ``` + +Remember to experiment with different UGens, patterns, and parameters to achieve your desired sound. 
SuperCollider provides a powerful and flexible environment for creating generative and algorithmic music, so don't hesitate to explore and customize the code to suit your needs. \ No newline at end of file diff --git a/tech_docs/networking/CCNA-exam-prep.md b/tech_docs/networking/CCNA-exam-prep.md new file mode 100644 index 0000000..c626b35 --- /dev/null +++ b/tech_docs/networking/CCNA-exam-prep.md @@ -0,0 +1,127 @@ +# CCNA 200-301 Official Cert Guide, Volume 1 Study Reference + +## Introduction +- Overview of CCNA 200-301 +- Study Plan Guidelines + +## Part I: Introduction to Networking +# CCNA 200-301 Official Cert Guide, Volume 1 - Study Reference + +## Part I: Introduction to Networking + +### Chapter 1: Introduction to TCP/IP Networking +- **"Do I Know This Already?" Quiz** +- **Foundation Topics** + - Perspectives on Networking + - TCP/IP Networking Model + - History Leading to TCP/IP + - Overview of the TCP/IP Networking Model + - TCP/IP Application Layer + - HTTP Overview + - HTTP Protocol Mechanisms + - TCP/IP Transport Layer + - TCP Error Recovery Basics + - Same-Layer and Adjacent-Layer Interactions + - TCP/IP Network Layer + - Internet Protocol and the Postal Service + - Internet Protocol Addressing Basics + - IP Routing Basics + - TCP/IP Data-Link and Physical Layers + - Data Encapsulation Terminology + - Names of TCP/IP Messages + - OSI Networking Model and Terminology + - Comparing OSI and TCP/IP Layer Names and Numbers + - OSI Data Encapsulation Terminology +- **Chapter Review** + +### Chapter 2: Fundamentals of Ethernet LANs +- **"Do I Know This Already?" Quiz** +- **Foundation Topics** + - An Overview of LANs + - Typical SOHO LANs + - Typical Enterprise LANs + - The Variety of Ethernet Physical Layer Standards + - Consistent Behavior over All Links Using the Ethernet Data-Link Layer + - Building Physical Ethernet LANs with UTP + - Transmitting Data Using Twisted Pairs + - Breaking Down a UTP Ethernet Link + - UTP Cabling Pinouts for 10BASE-T and 100BASE-T + - Straight-Through Cable Pinout + - Choosing the Right Cable Pinouts + - UTP Cabling Pinouts for 1000BASE-T + - Building Physical Ethernet LANs with Fiber + - Fiber Cabling Transmission Concepts + - Using Fiber with Ethernet + - Sending Data in Ethernet Networks + - Ethernet Data-Link Protocols + - Ethernet Addressing + - Identifying Network Layer Protocols with the Ethernet Type Field + - Error Detection with FCS + - Sending Ethernet Frames with Switches and Hubs + - Sending in Modern Ethernet LANs Using Full Duplex + - Using Half Duplex with LAN Hubs +- **Chapter Review** + +### Chapter 3: Fundamentals of WANs and IP Routing +- **Part I Review** + +## Part II: Implementing Ethernet LANs +### Chapter 4: Using the Command-Line Interface +### Chapter 5: Analyzing Ethernet LAN Switching +### Chapter 6: Configuring Basic Switch Management +### Chapter 7: Configuring and Verifying Switch Interfaces +- **Part II Review** + +## Part III: Implementing VLANs and STP +### Chapter 8: Implementing Ethernet Virtual LANs +### Chapter 9: Spanning Tree Protocol Concepts +### Chapter 10: RSTP and EtherChannel Configuration +- **Part III Review** + +## Part IV: IPv4 Addressing +### Chapter 11: Perspectives on IPv4 Subnetting +### Chapter 12: Analyzing Classful IPv4 Networks +### Chapter 13: Analyzing Subnet Masks +### Chapter 14: Analyzing Existing Subnets +- **Part IV Review** + +## Part V: IPv4 Routing +### Chapter 15: Operating Cisco Routers +### Chapter 16: Configuring IPv4 Addresses and Static Routes +### Chapter 17: IP Routing in 
the LAN +### Chapter 18: Troubleshooting IPv4 Routing +- **Part V Review** + +## Part VI: OSPF +### Chapter 19: Understanding OSPF Concepts +### Chapter 20: Implementing OSPF +### Chapter 21: OSPF Network Types and Neighbors +- **Part VI Review** + +## Part VII: IP Version 6 +### Chapter 22: Fundamentals of IP Version 6 +### Chapter 23: IPv6 Addressing and Subnetting +### Chapter 24: Implementing IPv6 Addressing on Routers +### Chapter 25: Implementing IPv6 Routing +- **Part VII Review** + +## Part VIII: Wireless LANs +### Chapter 26: Fundamentals of Wireless Networks +### Chapter 27: Analyzing Cisco Wireless Architectures +### Chapter 28: Securing Wireless Networks +### Chapter 29: Building a Wireless LAN +- **Part VIII Review** + +## Part IX: Appendixes and Online Resources +- Appendix A: Numeric Reference Tables +- Appendix B: CCNA 200-301, Volume 1 Exam Updates +- Appendix C: Answers to Quizzes +- Glossary +- Index +- Online Appendixes (D to R) for additional practice and topics + +### **Study Tips** +- Regularly review each part and complete associated quizzes. +- Utilize online appendixes for practical exercises. +- Follow the study planner for systematic progress. +- Engage with study aids like the glossary and index for quick references. \ No newline at end of file diff --git a/tech_docs/networking/NordVPN.md b/tech_docs/networking/NordVPN.md new file mode 100644 index 0000000..7e94c0a --- /dev/null +++ b/tech_docs/networking/NordVPN.md @@ -0,0 +1,197 @@ +Absolutely, let’s streamline the steps to set up NordVPN on a fresh OpenWrt device using CLI commands. This guide assumes you have basic knowledge of how to access your router via SSH and that OpenWrt is already installed on your device. + +### Step 1: Access Your Router +Connect to your router via SSH: +```bash +ssh root@192.168.1.1 +``` +Replace `192.168.1.1` with your router's IP address if it has been changed from the default. + +### Step 2: Update and Install Necessary Packages +Update the package manager and install OpenVPN and the necessary IP utilities: +```bash +opkg update +opkg install openvpn-openssl ip-full +``` + +### Step 3: Download and Set Up NordVPN Configuration Files +Choose a NordVPN server that you want to connect to and download its OpenVPN UDP configuration. You can find server configurations on the NordVPN website. + +1. **Download a server config file directly to your router**: + Replace `SERVERNAME` with your chosen server's name. + ```bash + wget -P /etc/openvpn https://downloads.nordcdn.com/configs/files/ovpn_udp/servers/SERVERNAME.udp.ovpn + ``` + +2. **Rename the downloaded configuration file for easier management**: + ```bash + mv /etc/openvpn/SERVERNAME.udp.ovpn /etc/openvpn/nordvpn.ovpn + ``` + +### Step 4: Configure VPN Credentials +NordVPN requires authentication with your service credentials. + +1. **Create a credentials file**: + Open a new file using `nano`: + ```bash + nano /etc/openvpn/credentials + ``` + Enter your NordVPN username and password, each on a separate line. Save and close the editor. + +2. **Modify the NordVPN configuration file to use the credentials file**: + ```bash + sed -i 's/auth-user-pass/auth-user-pass \/etc\/openvpn\/credentials/' /etc/openvpn/nordvpn.ovpn + ``` + +### Step 5: Enable and Start OpenVPN +1. **Automatically start OpenVPN with the NordVPN configuration on boot**: + ```bash + echo 'openvpn --config /etc/openvpn/nordvpn.ovpn &' >> /etc/rc.local + ``` + +2. 
**Start OpenVPN manually for the first time**: + ```bash + /etc/init.d/openvpn start + ``` + +### Step 6: Configure Network and Firewall +Ensure the VPN traffic is properly routed and the firewall is configured to allow it. + +1. **Edit the network configuration**: + Add a new interface for the VPN: + ```bash + uci set network.vpn0=interface + uci set network.vpn0.ifname='tun0' + uci set network.vpn0.proto='none' + uci commit network + ``` + +2. **Set up the firewall to allow traffic from LAN to the VPN**: + ```bash + uci add firewall zone + uci set firewall.@zone[-1].name='vpn' + uci set firewall.@zone[-1].network='vpn0' + uci set firewall.@zone[-1].input='REJECT' + uci set firewall.@zone[-1].output='ACCEPT' + uci set firewall.@zone[-1].forward='REJECT' + uci set firewall.@zone[-1].masq='1' + uci commit firewall + uci add firewall forwarding + uci set firewall.@forwarding[-1].src='lan' + uci set firewall.@forwarding[-1].dest='vpn' + uci commit firewall + ``` + +3. **Restart the firewall to apply changes**: + ```bash + /etc/init.d/firewall restart + ``` + +### Step 7: Test the Connection +Check if the VPN connection is active and working: +```bash +ping -c 4 google.com +``` + +You should now be connected to NordVPN through your OpenWrt router using the configured OpenVPN setup. This streamlined guide ensures you have a clear path through the configuration process with easy-to-follow CLI commands. + +--- + +The CLI instructions you're interested in offer a more hands-on approach to setting up NordVPN on an OpenWrt router. This method is ideal if you're comfortable using the command line and want more control over the VPN configuration. Here's a simplified version of the process, broken down into manageable steps: + +### 1. Access Router via SSH +Connect to your OpenWrt router using SSH. The default IP is usually `192.168.1.1` unless you have changed it. The default username is `root`. + +### 2. Install Necessary Packages +Update your package list and install the required OpenVPN packages: +```bash +opkg update +opkg install openvpn-openssl ip-full luci-app-openvpn +``` +(Optional) Install `nano` for easier file editing: +```bash +opkg install nano +``` + +### 3. Download OpenVPN Configuration +Use NordVPN's server recommendation tool to find the best server and download its configuration file directly to your router: +```bash +wget -P /etc/openvpn https://downloads.nordcdn.com/configs/files/ovpn_udp/servers/[server-name].udp.ovpn +``` +Replace `[server-name]` with the actual server name, such as `uk2054.nordvpn.com`. + +### 4. Configure OpenVPN +Edit the downloaded .ovpn file to include your NordVPN credentials: +```bash +nano /etc/openvpn/[server-name].udp.ovpn +``` +Modify the `auth-user-pass` line to point to a credentials file: +```plaintext +auth-user-pass /etc/openvpn/credentials +``` +Create the credentials file: +```bash +echo "YourUsername" > /etc/openvpn/credentials +echo "YourPassword" >> /etc/openvpn/credentials +chmod 600 /etc/openvpn/credentials +``` + +### 5. Enable OpenVPN to Start on Boot +Ensure OpenVPN starts automatically with your router: +```bash +/etc/init.d/openvpn enable +``` + +### 6. 
Set Up Networking and Firewall +Create a new network interface for the VPN and configure the firewall to route traffic through the VPN: + +**Network Interface Configuration:** +```bash +uci set network.nordvpntun=interface +uci set network.nordvpntun.proto='none' +uci set network.nordvpntun.ifname='tun0' +uci commit network +``` + +**Firewall Configuration:** +```bash +uci add firewall zone +uci set firewall.@zone[-1].name='vpnfirewall' +uci set firewall.@zone[-1].input='REJECT' +uci set firewall.@zone[-1].output='ACCEPT' +uci set firewall.@zone[-1].forward='REJECT' +uci set firewall.@zone[-1].masq='1' +uci set firewall.@zone[-1].mtu_fix='1' +uci add_list firewall.@zone[-1].network='nordvpntun' +uci add firewall forwarding +uci set firewall.@forwarding[-1].src='lan' +uci set firewall.@forwarding[-1].dest='vpnfirewall' +uci commit firewall +``` + +### 7. Configure DNS +Change DNS settings to use NordVPN DNS or another preferred DNS service: +```bash +uci set network.wan.peerdns='0' +uci del network.wan.dns +uci add_list network.wan.dns='103.86.96.100' +uci add_list network.wan.dns='103.86.99.100' +uci commit +``` + +### 8. Prevent Traffic Leakage (Optional) +To enhance security, add custom rules to block all traffic if the VPN disconnects: +```bash +echo "if (! ip a s tun0 up) && (! iptables -C forwarding_rule -j REJECT); then iptables -I forwarding_rule -j REJECT; fi" >> /etc/firewall.user +``` + +### 9. Start the VPN +Start the OpenVPN service and verify it's running properly: +```bash +/etc/init.d/openvpn start +``` + +### 10. Check Connection Status +Visit NordVPN's homepage or another site like `ipinfo.io` to check your IP address and ensure your traffic is routed through the VPN. + +This setup should give you a robust and secure VPN connection on your OpenWrt router using NordVPN. If you encounter any issues, you may need to review the configuration steps or consult NordVPN's support for further troubleshooting. 
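
If the IP check is inconclusive, a few router-side commands (all available on a standard OpenWrt install with `ip-full`) help confirm the tunnel is actually up and routing traffic. This is a sketch; adjust the interface name if yours differs from `tun0`.

```bash
# Confirm the tunnel interface exists and has an address
ip addr show tun0

# Check that routes point through the tunnel
ip route show

# Review OpenVPN log output for connection errors
logread | grep -i openvpn
```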
\ No newline at end of file diff --git a/tech_docs/networking/OpenWrt.md b/tech_docs/networking/OpenWrt.md new file mode 100644 index 0000000..59bc0d7 --- /dev/null +++ b/tech_docs/networking/OpenWrt.md @@ -0,0 +1,229 @@ +```bash +pct create 100 /var/lib/vz/template/cache/openwrt-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype unmanaged --hostname openwrt-0 \ +--password fuzzy817 --tag network --storage local-lvm --memory 256 --swap 128 \ +--rootfs local-lvm:1,size=512M --net0 name=eth0,bridge=vmbr0,firewall=1 \ +--net1 name=eth1,bridge=vmbr1,firewall=1 --cores 1 --cpuunits 500 --onboot 1 --debug 0 +``` + +```bash +pct start 100 +``` + +```bash +pct create 110 /var/lib/vz/template/cache/kali-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype debian --hostname kali-0 \ +--password fuzzy817 --tag tools --storage zfs-disk0 --cores 2 \ +--memory 2048 --swap 1024 --rootfs local-lvm:1,size=64G \ +--net0 name=eth0,bridge=vmbr0,firewall=1 --cpuunits 1500 --onboot 1 \ +--debug 0 --features nesting=1,keyctl=1 +``` +```bash +pct start 110 +``` + +```bash +pct create 120 /var/lib/vz/template/cache/alpine-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype alpine --hostname alpine-0 \ +--password fuzzy817 --tag docker --storage local-lvm --cores 2 \ +--memory 1024 --swap 256 --rootfs local-lvm:1,size=8G \ +--net0 name=eth0,bridge=vmbr0,firewall=1 --cpuunits 1000 --onboot 1 \ +--debug 0 --features nesting=1,keyctl=1 +``` +```bash +pct start 120 +``` + +--- + +# Proxmox Container Setup Guide + +## Introduction +This guide provides detailed instructions for configuring OpenWRT, Alpine Linux, and Kali Linux containers on a Proxmox VE environment. Each section covers the creation, configuration, and basic setup steps necessary to get each type of container up and running, tailored for use in a lab setting. + +## Links +- [Split A GPU Between Multiple Computers - Proxmox LXC (Unprivileged)](https://youtu.be/0ZDr5h52OOE?si=F4RVd5mA5IRjrpXU) +- [Must-Have OpenWrt Router Setup For Your Proxmox](https://youtu.be/3mPbrunpjpk?si=WofNEJUZL4FAw7HP) +- [Docker on Proxmox LXC 🚀 Zero Bloat and Pure Performance!](https://youtu.be/-ZSQdJ62r-Q?si=GCXOEsKnOdm6OIiz) + +## Prerequisites +- Proxmox VE installed on your server +- Access to Proxmox web interface or command-line interface +- Container templates downloaded (OpenWRT, Alpine, Kali Linux) + +## Container Configuration +### OpenWRT Container Setup +#### Description +This section details setting up an OpenWRT container designed for network routing and firewall tasks. 
+ +#### Create and Configure the OpenWRT Container +```bash +pct create 100 /var/lib/vz/template/cache/openwrt-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype unmanaged --hostname openwrt-0 \ +--password --tag network --storage local-lvm --memory 256 --swap 128 \ +--rootfs local-lvm:1,size=512M --net0 name=eth0,bridge=vmbr0,firewall=1 \ +--net1 name=eth1,bridge=vmbr1,firewall=1 --cores 1 --cpuunits 500 --onboot 1 --debug 0 +``` + +#### Start the Container and Access the Console +```bash +pct start 100 +pct console 100 +``` + +#### Update and Install Packages +```bash +opkg update +opkg install qemu-ga +reboot +``` + +#### Network and Firewall Configuration +Configure network settings and firewall rules: +```bash +vi /etc/config/network +/etc/init.d/network restart + +vi /etc/config/firewall +/etc/init.d/firewall restart + +# Setting up firewall rules using UCI +uci add firewall rule +uci set firewall.@rule[-1].name='Allow-SSH' +uci set firewall.@rule[-1].src='wan' +uci set firewall.@rule[-1].proto='tcp' +uci set firewall.@rule[-1].dest_port='22' +uci set firewall.@rule[-1].target='ACCEPT' + +uci add firewall rule +uci set firewall.@rule[-1].name='Allow-HTTPS' +uci set firewall.@rule[-1].src='wan' +uci set firewall.@rule[-1].proto='tcp' +uci set firewall.@rule[-1].dest_port='443' +uci set firewall.@rule[-1].target='ACCEPT' + +uci add firewall rule +uci set firewall.@rule[-1].name='Allow-HTTP' +uci set firewall.@rule[-1].src='wan' +uci set firewall.@rule[-1].proto='tcp' +uci set firewall.@rule[-1].dest_port='80' +uci set firewall.@rule[-1].target='ACCEPT' + +uci commit firewall +/etc/init.d/firewall restart +``` + +### Alpine Container Setup +#### Description +Set up an Alpine Linux container optimized for running Docker, ensuring lightweight deployment and management of Docker applications. 
+ +#### Create and Configure the Alpine Container +```bash +pct create 120 /var/lib/vz/template/cache/alpine-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype alpine --hostname alpine-0 \ +--password --tag docker --storage local-lvm --cores 2 \ +--memory 1024 --swap 256 --rootfs local-lvm:1,size=8G \ +--net0 name=eth0,bridge=vmbr0,firewall=1 --keyctl 1 --nesting 1 \ +--cpuunits 1000 --onboot 1 --debug 0 +``` + +#### Enter the Container +```bash +pct enter 120 +``` + +#### System Update and Package Installation +Enable community repositories and install essential packages: +```bash +sed -i '/^#.*community/s/^#//' /etc/apk/repositories +apk update && apk upgrade +apk add qemu-guest-agent docker openssh sudo +``` + +#### Start and Enable Docker Service +```bash +rc-service docker start +rc-update add docker default +``` + +#### Configure Network +Set up network interfaces and restart networking services: +```bash +setup-interfaces +service networking restart +``` + +#### Configure and Start SSH Service +```bash +rc-update add sshd +service sshd start +vi /etc/ssh/sshd_config +service sshd restart +``` + +#### Create a System User and Add to Docker Group and Sudoers +```bash +adduser -s /bin/ash medusa +addgroup medusa docker +echo "medusa ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/medusa +``` + +#### Test Docker Installation +```bash +docker run hello-world +``` + +```bash +docker volume create portainer_data +``` + +```bash +docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest +``` + +```markdown +[Portainer Dashboard](https://localhost:9443) +``` + +### Kali Linux Container Setup +#### Description +Configure a Kali Linux container tailored for security testing and penetration testing tools. + +#### Create and Configure the Kali Linux Container +```bash +pct create 110 /var/lib/vz/template/cache/kali-default-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype debian --hostname kali-0 \ +--password --tag tools --storage local-lvm --cores 2 \ +--memory 2048 --swap 1024 --rootfs local-lvm:1,size=10G \ +--net0 name=eth0,bridge=vmbr0,firewall=1 --cpuunits 1500 --onboot 1 \ +--debug 0 --features nesting=1,keyctl=1 +``` + +## Conclusion +Follow these steps to successfully set up and configure OpenWRT, Alpine, and Kali Linux containers on Proxmox. Adjust configurations according to your specific needs and ensure all passwords are secure before deploying containers in a production environment. 
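
For completeness: unlike the OpenWRT and Alpine sections, the Kali section stops at container creation. A minimal sketch of the follow-up steps is shown below; the `kali-linux-headless` metapackage is an assumed choice, so swap in whichever Kali metapackages you actually need.

```bash
# Start and enter the Kali container
pct start 110
pct enter 110

# Inside the container: update and install a headless toolset
# (kali-linux-headless is one reasonable metapackage choice, not the only one)
apt update && apt -y full-upgrade
apt -y install kali-linux-headless
```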
+ +```bash +pct create 100 /var/lib/vz/template/cache/openwrt-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype unmanaged --hostname openwrt-0 \ +--password --tag network --storage local-lvm --memory 256 --swap 128 \ +--rootfs local-lvm:1,size=512M --net0 name=eth0,bridge=vmbr0,firewall=1 \ +--net1 name=eth1,bridge=vmbr1,firewall=1 --cores 1 --cpuunits 500 --onboot 1 --debug 0 +``` + +```bash +pct create 110 /var/lib/vz/template/cache/kali-default-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype debian --hostname kali-0 \ +--password --tag tools --storage local-lvm --cores 2 \ +--memory 2048 --swap 1024 --rootfs local-lvm:1,size=10G \ +--net0 name=eth0,bridge=vmbr0,firewall=1 --cpuunits 1500 --onboot 1 \ +--debug 0 --features nesting=1,keyctl=1 +``` + +```bash +pct create 120 /var/lib/vz/template/cache/alpine-rootfs.tar.xz \ +--unprivileged 1 --arch amd64 --ostype alpine --hostname alpine-0 \ +--password --tag docker --storage local-lvm --cores 2 \ +--memory 1024 --swap 256 --rootfs local-lvm:1,size=8G \ +--net0 name=eth0,bridge=vmbr0,firewall=1 --keyctl 1 --nesting 1 \ +--cpuunits 1000 --onboot 1 --debug 0 +``` \ No newline at end of file diff --git a/tech_docs/networking/SOAR_lab.md b/tech_docs/networking/SOAR_lab.md new file mode 100644 index 0000000..13055c7 --- /dev/null +++ b/tech_docs/networking/SOAR_lab.md @@ -0,0 +1,99 @@ +Creating a security operations environment with Wazuh and integrating Shuffle SOAR can greatly enhance your ability to monitor, analyze, and respond to threats in real time. Here's a consolidated reference guide to get you started, detailing the components needed, benefits, and areas of focus relevant today and into the future. + +### Getting Started with Wazuh + +**Installation and Configuration:** +- **Wazuh Server Setup:** Begin by installing the Wazuh server, which involves adding the Wazuh repository to your system, installing the Wazuh manager, and configuring Filebeat for log forwarding【5†source】. +- **Component Overview:** Wazuh consists of a universal agent, Wazuh server (manager), Wazuh indexer, and Wazuh dashboard for visualizing the data【6†source】【7†source】. + +### Integrating Shuffle SOAR + +**Setup and Integration:** +- **Configuring Wazuh for Shuffle:** Configure Wazuh to forward alerts in JSON format to Shuffle by setting up an integration block in the `ossec.conf` file of the Wazuh manager【13†source】【14†source】. +- **Creating Workflows in Shuffle:** Use Shuffle to create workflows that will process the Wazuh alerts. You can automate various security operations based on the type of alerts received, such as disabling a user account in response to detected threats【13†source】. + +### Key Components and Benefits + +- **Unified Security Monitoring:** Wazuh provides a comprehensive platform for threat detection, incident response, and compliance monitoring across your environment. +- **Automation and Response:** Shuffle SOAR enables the automation of security operations, reducing response times to threats and freeing up resources for other critical tasks. +- **Flexibility and Scalability:** Both Wazuh and Shuffle are designed to be scalable and flexible, allowing for customization according to specific organizational needs. + +### Areas of Focus + +1. **Threat Detection and Response:** Leveraging Wazuh's detection capabilities with Shuffle's automated workflows can significantly improve the efficiency of threat detection and response mechanisms. +2. 
**Compliance and Auditing:** Wazuh's comprehensive monitoring and logging capabilities are invaluable for meeting compliance requirements and conducting audits. +3. **Security Orchestration:** The integration of SOAR tools like Shuffle into security operations centers (SOCs) is becoming increasingly important for orchestrating responses to security incidents. +4. **Cloud Security:** With the shift towards cloud environments, focusing on cloud-specific security challenges and integrating cloud-native tools into your security stack is crucial. + +### Looking Ahead + +- **Machine Learning and AI:** Incorporating machine learning and AI for anomaly detection and predictive analytics will become more prevalent, offering advanced threat detection capabilities. +- **Zero Trust Architecture:** Implementing Zero Trust principles, supported by continuous monitoring and verification from solutions like Wazuh, will be critical for securing modern networks. +- **Enhanced Automation:** The future lies in further automating security responses and operational tasks, reducing the time from threat detection to resolution. + +### Conclusion + +By integrating Wazuh with Shuffle SOAR, organizations can create a robust security operations framework capable of addressing modern security challenges. This guide serves as a starting point for building and enhancing your security posture with these powerful tools. As you implement and scale your operations, keep abreast of emerging technologies and security practices to ensure your environment remains secure and resilient against evolving threats. + + +--- + +Given the topics covered, here are several labs and learning experiences designed to enhance your skills with Wazuh and Shuffle SOAR, particularly within a virtualized environment using KVM and isolated bridge networks. These exercises aim to provide hands-on experience, from basic setups to more advanced integrations and security practices. + +### Lab 1: Basic Wazuh Server and Agent Setup + +**Objective:** Install and configure a basic Wazuh server and agent setup within a KVM virtualized environment. + +**Tasks:** +1. Create a VM for the Wazuh server on KVM, ensuring it is connected to an isolated bridge network. +2. Install the Wazuh server on this VM, following the [official documentation](https://documentation.wazuh.com/current/installation-guide/wazuh-server/index.html). +3. Create another VM for the Wazuh agent, connected to the same isolated bridge network. +4. Install the Wazuh agent and register it with the Wazuh server. + +**Learning Outcome:** Understand the process of setting up Wazuh in a virtualized environment and the basic communication between server and agent. + +### Lab 2: Advanced Wazuh Features Exploration + +**Objective:** Explore advanced features of Wazuh, such as rule writing, log analysis, and file integrity monitoring. + +**Tasks:** +1. Write custom detection rules for simulated threats (e.g., unauthorized SSH login attempts). +2. Configure and test file integrity monitoring on the agent VM. +3. Use the Wazuh Kibana app to analyze logs and alerts generated by the agent. + +**Learning Outcome:** Gain hands-on experience with Wazuh's advanced capabilities for threat detection and response. + +### Lab 3: Integrating Wazuh with Shuffle SOAR + +**Objective:** Integrate Wazuh with Shuffle SOAR to automate responses to specific alerts. + +**Tasks:** +1. Set up a basic Shuffle workflow that responds to a common threat detected by Wazuh (e.g., disabling a compromised user account). +2. 
Configure Wazuh to forward alerts to Shuffle using webhooks. +3. Simulate a threat that triggers the Wazuh alert and observe the automated response from Shuffle. + +**Learning Outcome:** Learn how to automate security operations by integrating Wazuh with a SOAR platform. + +### Lab 4: Security Hardening and Monitoring of Wazuh Environment + +**Objective:** Apply security best practices to harden the Wazuh environment and set up monitoring. + +**Tasks:** +1. Implement SSH key-based authentication for VMs. +2. Configure firewall rules to restrict access to the Wazuh server. +3. Set up monitoring for the Wazuh server using tools like Grafana to visualize logs and performance metrics. + +**Learning Outcome:** Understand the importance of security hardening and continuous monitoring in a security operations environment. + +### Lab 5: Cloud Integration and Elastic Stack + +**Objective:** Explore the integration of Wazuh with cloud services and Elastic Stack for enhanced log analysis and visualization. + +**Tasks:** +1. Configure Wazuh to monitor a cloud service (e.g., AWS S3 bucket for access logs). +2. Set up Elastic Stack (Elasticsearch, Logstash, Kibana) and integrate it with Wazuh for advanced log analysis. +3. Create dashboards in Kibana to visualize and analyze data from cloud services. + +**Learning Outcome:** Gain insights into how Wazuh can be used for monitoring cloud environments and the integration with Elastic Stack for log management. + +These labs offer a comprehensive learning path from basic setup to advanced usage and integration of Wazuh in a secure, virtualized environment. Working through these exercises will build a solid foundation in security monitoring, threat detection, and automated response strategies. \ No newline at end of file diff --git a/tech_docs/networking/cybersecurity_getting_started.md b/tech_docs/networking/cybersecurity_getting_started.md new file mode 100644 index 0000000..15ba77b --- /dev/null +++ b/tech_docs/networking/cybersecurity_getting_started.md @@ -0,0 +1,230 @@ +# Building a Cybersecurity Lab with Docker and Active Directory Integration + +## Introduction +This guide provides a comprehensive walkthrough for creating an advanced cybersecurity lab environment using Docker and Docker Compose, integrated with a `homelab.local` Active Directory domain. The lab is designed to offer a flexible, scalable, and easily manageable platform for cybersecurity professionals and enthusiasts to practice, experiment, and enhance their skills in various security domains. + +## Lab Architecture +The lab architecture consists of the following key components: +1. **Learning Paths**: The lab is organized into distinct learning paths, each focusing on a specific cybersecurity domain, such as network security, web application security, incident response, and malware analysis. This structure enables targeted skill development and focused experimentation. + +2. **Docker Containers**: Each learning path is implemented using Docker containers, providing isolated and reproducible environments for different security scenarios and tools. Containers ensure efficient resource utilization and ease of management. + +3. **Docker Compose**: Docker Compose is employed for orchestrating and managing the containers within each learning path. It allows for defining and configuring multiple services, networks, and volumes, simplifying the deployment and management of complex security environments. + +4. 
**Active Directory Integration**: The lab is integrated with a `homelab.local` Active Directory domain, enabling centralized user and resource management. This integration provides a realistic enterprise network simulation and allows for practicing security scenarios in a controlled Active Directory environment. + +```mermaid +graph TD + A[Host Machine] --> B[Docker] + B --> C[Network Security] + B --> D[Web Application Security] + B --> E[Incident Response and Forensics] + B --> F[Malware Analysis] + + G[homelab.local] --> H[Active Directory Integration] + H --> B +``` + +## Lab Setup +To set up the cybersecurity lab, follow these step-by-step instructions: + +### Prerequisites +- A host machine or dedicated server with sufficient resources (CPU, RAM, storage) to run multiple Docker containers. +- Docker and Docker Compose installed on the host machine. +- Access to the `homelab.local` Active Directory domain and its resources. + +### Step 1: Active Directory Integration +1. Ensure that the `homelab.local` Active Directory domain is properly set up and accessible from the host machine. +2. Create the necessary user accounts, security groups, and organizational units (OUs) within the Active Directory domain to mirror a realistic enterprise environment. + +### Step 2: Docker and Docker Compose Setup +1. Install Docker and Docker Compose on the host machine following the official documentation for your operating system. +2. Verify the successful installation by running `docker --version` and `docker-compose --version` in the terminal. + +### Step 3: Learning Paths Structure +1. Create a dedicated directory for each learning path on the host machine, such as `network-security`, `web-app-security`, `incident-response`, and `malware-analysis`. +2. Within each learning path directory, create a `Dockerfile` that defines the container environment, including the necessary tools, dependencies, and configurations specific to that learning path. +3. Create a `docker-compose.yml` file in each learning path directory to define the services, networks, and volumes required for that specific path. + +### Step 4: Configuration and Deployment +1. Customize the `Dockerfile` for each learning path, specifying the base image, installing required packages, and configuring the environment variables and settings. +2. Modify the `docker-compose.yml` file for each learning path, defining the services, networks, and volumes necessary for the specific security scenario or tool. +3. Use Docker Compose to build and deploy the containers for each learning path by running `docker-compose up -d` in the respective directory. + +### Step 5: Central Management +1. Create a central `docker-compose.yml` file at the root level of the lab directory to manage and orchestrate all the learning path containers collectively. +2. Consider using tools like Portainer or Rancher for a web-based GUI to manage and monitor the Docker containers, networks, and volumes across the entire lab. + +## Cybersecurity Learning Paths +The lab provides the following learning paths to cover various aspects of cybersecurity: + +### 1. Network Security +- **Packet Analysis**: Utilize tools like Wireshark and tcpdump to capture and analyze network traffic, identify anomalies, and detect potential security threats. +- **Firewall Configuration**: Configure and manage firewalls using tools like iptables and pfsense to control network traffic, implement access controls, and enforce security policies. 
+- **Intrusion Detection and Prevention**: Deploy and configure intrusion detection systems (IDS) and intrusion prevention systems (IPS) using tools like Snort and Suricata to monitor network traffic and detect and prevent malicious activities. +- **VPN and Secure Communication**: Set up and configure virtual private networks (VPNs) using OpenVPN or WireGuard to establish secure communication channels between different network segments and remote locations. + +```mermaid +graph LR + A[Network Security] --> B[Packet Analysis] + A --> C[Firewall Configuration] + A --> D[Intrusion Detection and Prevention] + A --> E[VPN and Secure Communication] + + B --> B1[Wireshark] + B --> B2[tcpdump] + + C --> C1[iptables] + C --> C2[pfsense] + + D --> D1[Snort] + D --> D2[Suricata] + + E --> E1[OpenVPN] + E --> E2[WireGuard] +``` + +### 2. Web Application Security +- **Vulnerability Assessment**: Perform web application vulnerability scanning and assessment using tools like OWASP ZAP, Burp Suite, and Nikto to identify common web vulnerabilities such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). +- **Penetration Testing**: Conduct in-depth penetration testing on web applications using tools and frameworks like Metasploit, sqlmap, and BeEF to identify and exploit vulnerabilities, and assess the application's resilience to attacks. +- **Web Application Firewall (WAF)**: Configure and deploy WAFs using tools like ModSecurity and NAXSI to protect web applications from common attacks, enforce security rules, and monitor web traffic for suspicious activities. +- **API Security**: Test and secure RESTful APIs using tools like Postman and Swagger to validate API functionality, authentication, authorization, and input validation. + +```mermaid +graph LR + A[Web Application Security] --> B[Vulnerability Assessment] + A --> C[Penetration Testing] + A --> D[Web Application Firewall] + A --> E[API Security] + + B --> B1[OWASP ZAP] + B --> B2[Burp Suite] + B --> B3[Nikto] + + C --> C1[Metasploit] + C --> C2[sqlmap] + C --> C3[BeEF] + + D --> D1[ModSecurity] + D --> D2[NAXSI] + + E --> E1[Postman] + E --> E2[Swagger] +``` + + +### 3. Incident Response and Forensics +- **Incident Response Planning**: Develop and practice incident response procedures using the lab environment to simulate security incidents, test incident response plans, and improve incident handling capabilities. +- **Log Analysis**: Collect and analyze system and application logs using tools like ELK stack (Elasticsearch, Logstash, Kibana) and Splunk to identify security events, detect anomalies, and investigate incidents. +- **Memory Forensics**: Perform memory forensics on compromised systems using tools like Volatility and Rekall to analyze memory dumps, identify malicious processes, and extract valuable artifacts for incident investigation. +- **Network Forensics**: Conduct network forensics using tools like NetworkMiner and Xplico to analyze network traffic captures (PCAP files), reconstruct network events, and investigate network-based attacks. +```mermaid +graph LR + A[Incident Response and Forensics] --> B[Incident Response Planning] + A --> C[Log Analysis] + A --> D[Memory Forensics] + A --> E[Network Forensics] + + C --> C1[ELK Stack] + C --> C2[Splunk] + + D --> D1[Volatility] + D --> D2[Rekall] + + E --> E1[NetworkMiner] + E --> E2[Xplico] +``` + +### 4. 
Malware Analysis +- **Static Analysis**: Perform static analysis on malware samples using tools like IDA Pro, Ghidra, and Radare2 to analyze malware code, identify suspicious functions, and understand the malware's behavior without executing it. +- **Dynamic Analysis**: Execute malware samples in isolated containers using tools like Cuckoo Sandbox and REMnux to observe the malware's behavior, analyze its interactions with the system and network, and identify its functionality and persistence mechanisms. +- **Reverse Engineering**: Apply reverse engineering techniques using tools like x64dbg and OllyDbg to disassemble and debug malware binaries, understand their internal workings, and identify obfuscation or anti-analysis techniques. +- **Malware Dissection**: Dissect and analyze different types of malware, such as ransomware, trojans, and botnets, to understand their infection vectors, command and control (C2) communication, and impact on infected systems. +```mermaid +graph LR + A[Malware Analysis] --> B[Static Analysis] + A --> C[Dynamic Analysis] + A --> D[Reverse Engineering] + A --> E[Malware Dissection] + + B --> B1[IDA Pro] + B --> B2[Ghidra] + B --> B3[Radare2] + + C --> C1[Cuckoo Sandbox] + C --> C2[REMnux] + + D --> D1[x64dbg] + D --> D2[OllyDbg] +``` +## Example Scenarios +To demonstrate the practical applications of the cybersecurity lab, consider the following example scenarios: + +### Scenario 1: Ransomware Attack Simulation +Objective: Simulate a ransomware attack and practice incident response procedures. + +Steps: +1. Set up a vulnerable Windows server container in the lab environment. +2. Create a simulated user environment with sample files and documents. +3. Deploy a controlled ransomware sample or a ransomware simulator within the container. +4. Monitor the network traffic and analyze the ransomware's behavior using tools like Wireshark and Snort. +5. Implement containment measures, such as isolating the infected container and blocking malicious traffic. +6. Perform memory forensics on the affected system to identify the encryption process and extract relevant artifacts. +7. Develop and test a recovery plan, including data restoration from backups and system hardening measures. +```mermaid +graph LR + A[Vulnerable Windows Server Container] --> B[Deploy Ransomware] + B --> C[Monitor Network Traffic] + C --> D[Implement Containment Measures] + D --> E[Perform Memory Forensics] + E --> F[Develop Recovery Plan] + F --> G[Restore Data and Harden System] +``` +### Scenario 2: Web Application Penetration Testing +Objective: Conduct a penetration test on a vulnerable web application to identify and exploit vulnerabilities. + +Steps: +1. Deploy a purposefully vulnerable web application, such as OWASP Juice Shop or DVWA, in a container. +2. Perform reconnaissance to gather information about the application's functionality and potential attack surfaces. +3. Conduct vulnerability scanning using tools like OWASP ZAP and Burp Suite to identify common web vulnerabilities. +4. Attempt to exploit the identified vulnerabilities, such as SQL injection or XSS, to gain unauthorized access or extract sensitive data. +5. Document the findings, including the steps taken, vulnerabilities discovered, and the potential impact of each vulnerability. +6. Provide recommendations for remediation and security best practices based on the penetration testing results. 
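The full flow of this exercise is diagrammed below. As a concrete illustration of step 1, the target application can be launched as a disposable container; this is a minimal sketch assuming the commonly used community images for OWASP Juice Shop (`bkimminich/juice-shop`) and DVWA (`vulnerables/web-dvwa`). Substitute whatever images, ports, and networks your lab's compose files define.

```bash
# Launch OWASP Juice Shop; the app listens on port 3000 inside the container
docker run -d --name juice-shop -p 3000:3000 bkimminich/juice-shop

# Or launch DVWA, which serves on port 80 inside the container
docker run -d --name dvwa -p 8080:80 vulnerables/web-dvwa

# Quick reachability check before starting reconnaissance and scanning
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
```

Tearing the containers down afterwards (`docker rm -f juice-shop dvwa`) keeps the lab clean between runs.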
+```mermaid +graph LR + A[Deploy Vulnerable Web Application] --> B[Perform Reconnaissance] + B --> C[Conduct Vulnerability Scanning] + C --> D[Exploit Identified Vulnerabilities] + D --> E[Document Findings] + E --> F[Provide Remediation Recommendations] +``` +### Scenario 3: Malware Analysis and Reverse Engineering +Objective: Analyze a malware sample to understand its behavior and develop detection and mitigation strategies. + +Steps: +1. Obtain a malware sample from a trusted source or create a custom malware binary for analysis. +2. Perform static analysis on the malware sample using tools like IDA Pro or Ghidra to examine its code structure and identify suspicious functions. +3. Conduct dynamic analysis by executing the malware in an isolated container and monitoring its behavior using tools like Process Monitor and Wireshark. +4. Analyze the malware's interactions with the file system, registry, and network to understand its functionality and persistence mechanisms. +5. Reverse engineer the malware using a debugger like x64dbg to understand its internal logic and identify any obfuscation techniques. +6. Develop YARA rules or other detection signatures based on the identified characteristics of the malware. +7. Propose mitigation strategies, such as network segregation, application whitelisting, and endpoint protection, to defend against the analyzed malware. +```mermaid +graph LR + A[Obtain Malware Sample] --> B[Perform Static Analysis] + B --> C[Conduct Dynamic Analysis] + C --> D[Analyze Malware Interactions] + D --> E[Reverse Engineer Malware] + E --> F[Develop Detection Signatures] + F --> G[Propose Mitigation Strategies] +``` +## Conclusion +The cybersecurity lab setup described in this guide provides a comprehensive and flexible environment for practicing and developing a wide range of cybersecurity skills. By leveraging Docker and Active Directory integration, the lab offers a realistic and manageable platform for simulating various security scenarios, analyzing threats, and testing defense mechanisms. + +Through the different learning paths and example scenarios, readers can gain hands-on experience in network security, web application security, incident response, forensics, and malware analysis. The lab environment enables readers to explore and experiment with industry-standard tools and techniques, enhancing their practical skills and understanding of real-world cybersecurity challenges. + +By following the step-by-step instructions and best practices outlined in this guide, readers can build a robust and customizable cybersecurity lab that adapts to their learning objectives and evolving security landscape. The modular nature of the lab allows for easy expansion and integration of additional security tools and scenarios as needed. + +Remember to continuously update and refine the lab environment, stay informed about the latest security threats and techniques, and engage with the cybersecurity community to share knowledge and collaborate on new challenges. + +Happy learning and secure coding! \ No newline at end of file diff --git a/tech_docs/networking/firewalls.md b/tech_docs/networking/firewalls.md new file mode 100644 index 0000000..e51ba3d --- /dev/null +++ b/tech_docs/networking/firewalls.md @@ -0,0 +1,420 @@ + + +--- + +Certainly! Let's consider a more complex, real-world enterprise scenario and compare the configuration steps for Palo Alto Networks and Fortinet FortiGate firewalls. 
+ +Scenario: +- The enterprise has multiple web servers hosting different applications, each requiring inbound HTTPS access (port 443) from specific source networks. +- The web servers are located in a DMZ network (192.168.10.0/24) behind the firewall. +- The firewall should perform NAT to translate public IP addresses to the respective web servers' private IP addresses. +- The firewall should enforce security policies to inspect HTTPS traffic for potential threats and apply application-specific rules. + +Solution 1: Palo Alto Networks + +Step 1: Configure NAT rules for each web server. +``` +set rulebase nat rules +set name "NAT_Web_Server_1" +set source any +set destination +set service any +set translate-to + +set rulebase nat rules +set name "NAT_Web_Server_2" +set source any +set destination +set service any +set translate-to +``` + +Step 2: Create security zones and assign interfaces. +``` +set network interface ethernet1/1 layer3 interface-management-profile none zone untrust +set network interface ethernet1/2 layer3 interface-management-profile none zone dmz +set zone dmz network layer3 [ ethernet1/2 ] +``` + +Step 3: Define security policies for each web server. +``` +set rulebase security rules +set name "Allow_HTTPS_Web_Server_1" +set from untrust +set to dmz +set source +set destination +set application ssl +set service application-default +set action allow +set profile-setting profiles virus default spyware default vulnerability default url-filtering default + +set rulebase security rules +set name "Allow_HTTPS_Web_Server_2" +set from untrust +set to dmz +set source +set destination +set application ssl +set service application-default +set action allow +set profile-setting profiles virus default spyware default vulnerability default url-filtering default +``` + +Step 4: Configure SSL decryption and inspection. +``` +set rulebase decryption rules +set name "SSL_Inspect_Web_Servers" +set action no-decrypt +set source any +set destination [ ] +set service ssl +``` + +In this Palo Alto Networks solution, NAT rules are configured for each web server to translate the public IP addresses to their respective private IP addresses. Security zones are created, and interfaces are assigned to segregate the untrust (Internet-facing) and DMZ networks. Security policies are defined for each web server, specifying the allowed source networks, destination IP addresses, and applications (SSL). The policies also apply default security profiles for threat prevention. SSL decryption rules are configured to inspect the HTTPS traffic for potential threats. + +Solution 2: Fortinet FortiGate + +Step 1: Configure firewall addresses for the web servers. +``` +config firewall address + edit "Web_Server_1" + set subnet 192.168.10.10/32 + next + edit "Web_Server_2" + set subnet 192.168.10.20/32 + next +end +``` + +Step 2: Configure virtual IPs (VIPs) for each web server. +``` +config firewall vip + edit "VIP_Web_Server_1" + set extip + set mappedip "Web_Server_1" + set extintf "port1" + set portforward enable + set extport 443 + set mappedport 443 + next + edit "VIP_Web_Server_2" + set extip + set mappedip "Web_Server_2" + set extintf "port1" + set portforward enable + set extport 443 + set mappedport 443 + next +end +``` + +Step 3: Create firewall policies for each web server. 
+``` +config firewall policy + edit 1 + set name "Allow_HTTPS_Web_Server_1" + set srcintf "port1" + set dstintf "dmz" + set srcaddr + set dstaddr "VIP_Web_Server_1" + set action accept + set service "HTTPS" + set ssl-ssh-profile "deep-inspection" + set nat enable + next + edit 2 + set name "Allow_HTTPS_Web_Server_2" + set srcintf "port1" + set dstintf "dmz" + set srcaddr + set dstaddr "VIP_Web_Server_2" + set action accept + set service "HTTPS" + set ssl-ssh-profile "deep-inspection" + set nat enable + next +end +``` + +Step 4: Configure SSL deep inspection. +``` +config firewall ssl-ssh-profile + edit "deep-inspection" + set comment "SSL deep inspection" + set ssl inspect-all + set untrusted-caname "Fortinet_CA_SSL" + next +end +``` + +In the Fortinet FortiGate solution, firewall addresses are defined for each web server. Virtual IPs (VIPs) are configured to map the public IP addresses to the respective web server addresses and specify the port translation. Firewall policies are created for each web server, allowing HTTPS traffic from specific source networks to the corresponding VIPs. The policies also enable NAT and apply an SSL deep inspection profile to examine the encrypted traffic for threats. + +Comparison: +Both Palo Alto Networks and Fortinet FortiGate offer robust security features and granular control for managing inbound HTTPS traffic in an enterprise environment. However, there are differences in their configuration approaches and terminology. + +Palo Alto Networks uses a zone-based approach, where security zones are created, and interfaces are assigned to them. NAT rules and security policies are configured separately, allowing for more flexibility and control over traffic flows. Palo Alto Networks also provides a comprehensive set of security profiles for threat prevention. + +Fortinet FortiGate, on the other hand, uses a more integrated approach with firewall addresses, VIPs, and firewall policies. VIPs combine the NAT configuration with the firewall rules, simplifying the setup. Firewall policies define the allowed traffic flow and include security features like SSL deep inspection. + +Both firewalls offer advanced security features, such as SSL decryption and inspection, to detect and prevent threats in encrypted traffic. They also provide granular control over source and destination networks, applications, and services. + +When choosing between Palo Alto Networks and Fortinet FortiGate for an enterprise environment, factors like the organization's security requirements, existing network infrastructure, ease of management, and integration with other security tools should be considered. + +In summary, this real-world enterprise scenario demonstrates the configuration steps for allowing inbound HTTPS traffic to multiple web servers using Palo Alto Networks and Fortinet FortiGate firewalls. While both firewalls provide comprehensive security features, their configuration approaches and terminology differ, reflecting their unique architectures and philosophies. + +--- + +Certainly! Here's a reference guide for how each OEM (Cisco ASA, Fortinet FortiGate, Palo Alto Networks, and Cisco Meraki MX) performs the core firewall tasks (traffic filtering, NAT, VPN, and threat prevention) via CLI: + +1. Traffic Filtering + a. Cisco ASA: + - Configure access-list: `access-list ` + - Apply access-list to interface: `access-group interface ` + + b. 
Fortinet FortiGate: + - Configure firewall policy: `config firewall policy` + - Set policy details: `edit `, `set srcintf `, `set dstintf `, `set srcaddr `, `set dstaddr `, `set service `, `set action ` + + c. Palo Alto Networks: + - Configure security rule: `set rulebase security rules` + - Set rule details: `set name `, `set from `, `set to `, `set source `, `set destination `, `set service `, `set action ` + + d. Cisco Meraki MX (via Dashboard): + - Configure firewall rule in the Meraki Dashboard: + - Navigate to Security & SD-WAN > Configure > Firewall + - Click "Add a Rule" and set the rule details (source, destination, service, action) + +2. Network Address Translation (NAT) + a. Cisco ASA: + - Configure static NAT: `nat (,) source static ` + - Configure dynamic NAT: `nat (,) source dynamic ` + + b. Fortinet FortiGate: + - Configure SNAT: `config firewall ippool`, `edit `, `set startip `, `set endip ` + - Apply SNAT to policy: `config firewall policy`, `edit `, `set ippool enable`, `set poolname ` + + c. Palo Alto Networks: + - Configure NAT rule: `set rulebase nat rules` + - Set rule details: `set name `, `set source `, `set destination `, `set service `, `set source-translation dynamic-ip-and-port ` + + d. Cisco Meraki MX (via Dashboard): + - Configure NAT in the Meraki Dashboard: + - Navigate to Security & SD-WAN > Configure > NAT + - Click "Add a Rule" and set the rule details (source, destination, service, translation type) + +3. Virtual Private Network (VPN) + a. Cisco ASA: + - Configure IKEv1 policy: `crypto ikev1 policy `, `authentication pre-share`, `encryption `, `hash `, `group `, `lifetime ` + - Configure IPsec transform set: `crypto ipsec transform-set ` + - Configure tunnel group: `tunnel-group type ipsec-l2l`, `tunnel-group ipsec-attributes`, `pre-shared-key ` + - Configure crypto map: `crypto map ipsec-isakmp`, `set peer `, `set transform-set `, `set pfs `, `match address ` + + b. Fortinet FortiGate: + - Configure Phase 1 (IKE): `config vpn ipsec phase1-interface`, `edit `, `set interface `, `set remote-gw `, `set proposal --` + - Configure Phase 2 (IPsec): `config vpn ipsec phase2 + +-interface`, `edit `, `set phase1name `, `set proposal --` + - Configure firewall policy for VPN: `config firewall policy`, `edit `, `set srcintf `, `set dstintf `, `set srcaddr `, `set dstaddr `, `set action ipsec`, `set schedule always`, `set service ANY`, `set inbound enable`, `set outbound enable` + + c. Palo Alto Networks: + - Configure IKE gateway: `set network ike gateway `, `set address `, `set authentication pre-shared-key `, `set local-address `, `set protocol ikev1` + - Configure IPsec tunnel: `set network tunnel ipsec `, `set auto-key ike-gateway `, `set auto-key ipsec-crypto-profile ` + - Configure IPsec crypto profile: `set network ipsec crypto-profiles `, `set esp encryption `, `set esp authentication ` + - Configure security policy for VPN: `set rulebase security rules`, `set name `, `set from `, `set to `, `set source `, `set destination `, `set application any`, `set service any`, `set action allow`, `set profile-setting profiles spyware virus ` + + d. 
Cisco Meraki MX (via Dashboard): + - Configure site-to-site VPN in the Meraki Dashboard: + - Navigate to Security & SD-WAN > Configure > Site-to-site VPN + - Click "Add a peer" and set the peer details (peer IP, remote subnet, pre-shared key) + - Configure the local networks to be advertised + - Configure client VPN (L2TP over IPsec) in the Meraki Dashboard: + - Navigate to Security & SD-WAN > Configure > Client VPN + - Enable client VPN and set the authentication details (pre-shared key, client IP range) + +4. Threat Prevention + a. Cisco ASA with FirePOWER Services: + - Configure access control policy: `access-control-policy`, `edit `, `rule add `, `action `, `source `, `destination `, `port `, `application `, `intrusion-policy `, `file-policy `, `logging ` + + b. Fortinet FortiGate: + - Configure antivirus profile: `config antivirus profile`, `edit `, `set comment `, `set inspection-mode `, `set ftgd-analytics ` + - Configure IPS sensor: `config ips sensor`, `edit `, `set comment `, `set block-malicious-url `, `set extended-log ` + - Apply antivirus and IPS profiles to firewall policy: `config firewall policy`, `edit `, `set av-profile `, `set ips-sensor ` + + c. Palo Alto Networks: + - Configure antivirus profile: `set deviceconfig system profiles anti-virus `, `set threat-prevention packet-capture `, `set action ` + - Configure anti-spyware profile: `set deviceconfig system profiles spyware `, `set threat-prevention packet-capture `, `set action ` + - Configure vulnerability protection profile: `set deviceconfig system profiles vulnerability `, `set threat-prevention packet-capture `, `set action ` + - Attach profiles to security policy: `set rulebase security rules`, `set name `, `set profile-setting profiles spyware virus vulnerability ` + + d. Cisco Meraki MX (via Dashboard): + - Configure threat protection in the Meraki Dashboard: + - Navigate to Security & SD-WAN > Configure > Threat Protection + - Enable intrusion detection and prevention (IDS/IPS) and set the security level + - Enable advanced malware protection (AMP) and set the detection and blocking options + - Configure URL filtering and set the content categories to be blocked + +This reference guide provides a high-level overview of how to configure core firewall tasks using the CLI for each OEM. Keep in mind that the exact commands and syntax may vary depending on the specific device model and software version. It's always recommended to refer to the official documentation and command references provided by the respective vendors for the most accurate and up-to-date information. + +Introduction + +Firewalls are essential components of network security, serving as the first line of defense against external threats and unauthorized access. They enforce security policies by controlling the flow of network traffic based on predefined rules and criteria. The effectiveness and functionality of a firewall depend heavily on how it implements key features such as traffic filtering, Network Address Translation (NAT), Virtual Private Network (VPN), and threat prevention. + +Traffic filtering is the foundation of firewall functionality. It involves inspecting incoming and outgoing network packets and making decisions based on factors like source and destination IP addresses, ports, protocols, and application-level data. Firewalls use various techniques for traffic filtering, such as stateful inspection, which maintains the state of network connections and allows for more granular control. 
According to a 2021 report by Grand View Research, the global network security firewall market size was valued at USD 4.3 billion in 2020 and is expected to grow at a compound annual growth rate (CAGR) of 12.1% from 2021 to 2028, highlighting the importance of effective traffic filtering in modern networks. + +Network Address Translation (NAT) is a critical feature that allows firewalls to mask the internal network structure and conserve public IP addresses. NAT enables multiple devices on a private network to share a single public IP address, enhancing security and simplifying network configuration. Firewalls support different types of NAT, such as static NAT, dynamic NAT, and Port Address Translation (PAT). A study by Cisco found that NAT can help organizations save up to 50% on public IP address costs while improving network security and manageability. + +Virtual Private Network (VPN) capabilities are essential for securing remote access and enabling secure communication between disparate network segments. Firewalls support various VPN technologies, such as IPsec, SSL/TLS, and PPTP, each with its own advantages and trade-offs. According to a 2021 report by Global Market Insights, the global VPN market size exceeded USD 30 billion in 2020 and is projected to grow at a CAGR of over 15% from 2021 to 2027, driven by the increasing demand for secure remote access solutions. + +Threat prevention is an increasingly important aspect of modern firewalls, as they evolve beyond simple packet filtering to become comprehensive security gateways. Firewalls employ various techniques to detect and block advanced threats, such as intrusion prevention systems (IPS), malware scanning, URL filtering, and sandboxing. A 2021 report by MarketsandMarkets projects that the global threat intelligence market size will grow from USD 11.6 billion in 2021 to USD 15.8 billion by 2026, at a CAGR of 6.3%, underlining the importance of robust threat prevention capabilities in firewalls. + +In the following sections, we will examine how four leading firewall vendors—Cisco ASA, Fortinet FortiGate, Palo Alto Networks, and Cisco Meraki MX—implement these core functionalities. By delving into the technical specifics and underlying mechanisms of each solution, this comparative analysis aims to provide a comprehensive understanding of their capabilities, strengths, and differences. This knowledge is crucial for organizations seeking to make informed decisions when selecting and configuring firewall solutions to align with their specific security requirements and network architectures. + +--- + +You're right in observing that fundamentally, all firewall platforms—whether Cisco ASA, Fortinet FortiGate, Palo Alto Networks, Cisco Meraki MX, or others—serve the same core purpose: to protect networks by managing and controlling the flow of traffic based on defined security rules. They achieve these objectives through mechanisms that might differ in terminology or implementation details but ultimately perform similar functions. Here’s a simplified abstraction of how these firewalls operate, focusing on their common functionalities: + +### Core Functions of Firewalls: +1. **Traffic Filtering:** All firewall technologies employ some form of traffic filtering, whether they're using ACLs (Access Control Lists), security policies, or unified threat management rules. They decide whether to block or allow traffic based on source and destination IP addresses, port numbers, and other protocol-specific characteristics. + +2. 
**Network Address Translation (NAT):** This is a universal feature across firewalls used to mask the internal IP addresses of a network from the external world. The terminology and specific capabilities (like static NAT, dynamic NAT, PAT) might vary, but the fundamental purpose remains to facilitate secure communication between internal and external networks. + +3. **VPN Support:** Virtual Private Networks (VPNs) are supported by all major firewall platforms, though the implementations (IPSec, SSL VPN, etc.) and the specific features (like remote access VPN and site-to-site VPN) might differ. The end goal is to securely extend a network’s reach over the internet. + +4. **User and Application Control:** Modern firewalls go beyond traditional packet filtering by integrating user and application-level visibility and control. Technologies like Palo Alto’s App-ID and User-ID or similar features in other platforms enable more granular control based on application traffic and user identity, respectively. + +5. **Threat Prevention:** Firewalls are increasingly incorporating integrated threat prevention tools that include IDS/IPS (Intrusion Detection and Prevention Systems), anti-malware, and URL filtering. These features help to identify and mitigate threats before they can penetrate deeper into the network. + +### Terminology Differences: +- **Cisco ASA** might refer to its filtering mechanism as access groups and ACLs, whereas **Palo Alto** would discuss it in terms of security policies that integrate with application and user IDs. +- **Fortinet** integrates NAT within their security policies, making it a bit more straightforward in terms of policy management, compared to **Cisco ASA**, where NAT and security policies might be configured separately. +- **Palo Alto** and **Fortinet** emphasize application-level insights and controls, using terms like App-ID and NGFW (Next-Generation Firewall) features, which might not be explicitly named in the simpler, more traditional configurations of older Cisco ASA models. + +Despite these differences in terminology and certain proprietary technologies, the underlying principles of how these firewalls operate remain largely consistent. They all aim to secure network environments through a combination of packet filtering, user and application control, and threat mitigation techniques, adapting these basic functions to modern network demands and threats in slightly different ways to cater to various organizational needs. + +--- + +### Introduction +Choosing the right firewall solution is crucial for protecting an organization's network infrastructure. Firewalls not only block unauthorized access but also provide a control point for traffic entering and exiting the network. This comparative analysis examines Cisco ASA, Fortinet FortiGate, and Palo Alto firewalls, focusing on their approaches to firewall policy and NAT configurations, helping organizations select the best fit based on specific needs and network environments. + +### Firewall Policy Configuration +#### **Cisco ASA** +- **Approach**: Utilizes access control lists (ACLs) and access groups for detailed traffic management. +- **Key Features**: High granularity allows for precise control, which is essential in complex network setups needing stringent security measures. + +#### **Fortinet FortiGate** +- **Approach**: Adopts an integrated policy system that combines addresses, services, and actions. 
+- **User Experience**: Simplifies configuration, making it suitable for environments that require quick setup and changes. + +#### **Palo Alto Networks** +- **Approach**: Employs a comprehensive strategy using zones and profiles, focusing on controlling traffic based on applications and users. +- **Key Features**: Includes User-ID and App-ID technologies that enhance security by enabling policy enforcement based on user identity and application traffic, ensuring that security measures are both stringent and adaptable to organizational needs. + +### NAT Configuration +#### **Overview** +Network Address Translation (NAT) is crucial for hiding internal IP addresses and managing the IP routing between internal and external networks. It is a fundamental security feature that also optimizes the use of IP addresses. + +#### **Cisco ASA** +- **Flexibility**: Offers robust options for static and dynamic NAT, catering to complex network requirements. + +#### **Fortinet FortiGate** +- **Integration**: Features an intuitive setup where NAT configurations are integrated within firewall policies, facilitating easier management and visibility. + +#### **Palo Alto Networks** +- **Innovation**: Provides versatile NAT options that are tightly integrated with security policies, supporting complex translations including bi-directional NAT for detailed traffic control. + +### Comparative Summary +#### **Performance and Scalability** +- **Cisco ASA** is known for its stability and robust performance, handling high-volume traffic effectively. +- **Fortinet FortiGate** and **Palo Alto Networks** both excel in environments that scale dynamically, offering solutions that adapt quickly to changing network demands. + +#### **Integration with Other Security Tools** +- All three platforms offer extensive integrations with additional security tools such as SIEM systems, intrusion prevention systems (IPS), and endpoint protection, enhancing overall security architecture. + +#### **Cost and Licensing** +- **Cisco ASA** often involves a straightforward, albeit sometimes costly, licensing structure. +- **Fortinet FortiGate** typically provides a cost-effective solution with flexible licensing options. +- **Palo Alto Networks** may involve higher costs but justifies them with advanced features and comprehensive security coverage. + +### Conclusion +Selecting the right firewall is a pivotal decision that depends on specific organizational requirements including budget, expected traffic volume, administrative expertise, and desired security level. This analysis highlights the distinct capabilities and configurations of Cisco ASA, Fortinet FortiGate, and Palo Alto Networks, guiding organizations towards making an informed choice that aligns with their security needs and operational preferences. + +--- + +### 4. 
Cisco Meraki MX +- **Models Covered**: Meraki MX64, MX84, MX100, MX250 +- **Throughput**: + - **Firewall Throughput**: Up to 4 Gbps + - **VPN Throughput**: Up to 1 Gbps +- **Concurrent Sessions**: Up to 2,000,000 +- **VPN Support**: + - **Protocols**: Auto VPN (IPSec), L2TP over IPSec + - **Remote Access VPN**: Client VPN (L2TP over IPSec) +- **NAT Features**: + - **1:1 NAT, 1:Many NAT** + - **Port forwarding, and DMZ host** +- **Security Features**: + - **Threat Defense**: Integrated intrusion detection and prevention (IDS/IPS) + - **Content Filtering**: Native content filtering, categories-based + - **Access Control**: User and device-based policies +- **Deployment**: + - **Cloud Managed**: Entirely managed via the cloud, simplifying large-scale deployments and remote management. + - **Zero-Touch Deployment**: Fully supported +- **Special Features**: + - **SD-WAN Capabilities**: Advanced SD-WAN policy-based routing integrates with auto VPN for dynamic path selection. + +### 5. SELinux (Security-Enhanced Linux) +- **Base**: Linux Kernel modification +- **Main Use**: Enforcing mandatory access controls (MAC) to enhance the security of Linux systems. +- **Operation Mode**: + - **Enforcing**: Enforces policies and denies access based on policy rules. + - **Permissive**: Logs policy violations but does not enforce them. + - **Disabled**: SELinux functionality turned off. +- **Security Features**: + - **Type Enforcement**: Controls access based on type attributes attached to each subject and object. + - **Role-Based Access Control (RBAC)**: Users perform operations based on roles, which govern the types of operations allowable. + - **Multi-Level Security (MLS)**: Adds sensitivity labels on objects for handling varying levels of security. +- **Deployment**: + - **Compatibility**: Compatible with most major distributions of Linux. + - **Management Tools**: Various tools available for policy management, including `semanage`, `setroubleshoot`, and graphical interfaces like `system-config-selinux`. +- **Advantages**: + - **Granular Control**: Provides very detailed and customizable security policies. + - **Audit and Compliance**: Excellent support for audit and compliance requirements with comprehensive logging. + + Here are the additional fact sheets for AppArmor, a Linux security module, and typical VPN technologies used within Linux environments: + +--- + +### 6. AppArmor (Application Armor) +- **Base**: Linux Kernel security module similar to SELinux +- **Main Use**: Provides application security by enabling administrators to confine programs to a limited set of resources, based on per-program profiles. +- **Operation Mode**: + - **Enforce Mode**: Enforces all rules defined in the profiles and restricts access accordingly. + - **Complain Mode**: Does not enforce rules but logs all violations. +- **Security Features**: + - **Profile-Based Access Control**: Each application can have a unique profile that specifies its permissions, controlling file access, capabilities, network access, and other resources. + - **Ease of Configuration**: Generally considered easier to configure and maintain than SELinux due to its more straightforward syntax and profile management. +- **Deployment**: + - **Compatibility**: Integrated into many Linux distributions, including Ubuntu and SUSE. + - **Management Tools**: `aa-genprof` for generating profiles, `aa-enforce` to switch profiles to enforce mode, and `aa-complain` to set profiles to complain mode. 
+- **Advantages**: + - **Simplicity and Accessibility**: Less complex than SELinux, making it more accessible for less experienced administrators. + - **Flexibility**: Offers effective containment and security without the extensive configuration SELinux may require. + +### 7. Linux VPN Technologies +- **Common Solutions**: + - **OpenVPN**: A robust and highly configurable VPN solution that uses SSL/TLS for key exchange. It is capable of traversing network address translators (NATs) and firewalls. + - **WireGuard**: A newer, simpler, and faster approach to VPN that integrates more directly into the Linux kernel, offering better performance than older protocols. + - **IPSec/L2TP**: Often used in corporate environments, IPSec is used with L2TP to provide encryption at the network layer. +- **Throughput and Performance**: + - **OpenVPN**: Good performance with strong encryption. Suitable for most consumer and many enterprise applications. + - **WireGuard**: Exceptional performance, particularly in terms of connection speed and reconnection times over mobile networks. +- **Security Features**: + - **OpenVPN**: High security with configurable encryption methods. Supports various authentication mechanisms including certificates, pre-shared keys, and user authentication. + - **WireGuard**: Uses state-of-the-art cryptography and aims to be as easy to configure and deploy as SSH. +- **Deployment**: + - **Configuration**: Both OpenVPN and WireGuard offer easy-to-use CLI tools and are supported by a variety of GUIs across Linux distributions. + - **Compatibility**: Supported across a wide range of devices and Linux distributions. +- **Advantages**: + - **OpenVPN**: Wide adoption, extensive documentation, and strong community support. + - **WireGuard**: Modern cryptographic techniques, minimalistic design, and kernel-level integration for optimal performance. \ No newline at end of file diff --git a/tech_docs/networking/home_network.md b/tech_docs/networking/home_network.md new file mode 100644 index 0000000..d40eee6 --- /dev/null +++ b/tech_docs/networking/home_network.md @@ -0,0 +1,49 @@ +To provide a comprehensive turnkey solution for a power user's home network leveraging OPNsense with zero-trust principles, VLAN segmentation, and advanced WAN management, we'll break down the network architecture into a detailed plan. This plan includes VLAN allocation, device roles, and how traffic is managed across WAN links. + +### Network Overview: + +- **WAN Links**: + - **WAN1 (Comcast)**: Primary internet connection, suitable for sensitive or work-related traffic. Limited by a data cap. + - **WAN2 (T-Mobile 5G)**: Secondary internet connection, unlimited data but CGNAT. Ideal for high-bandwidth or background tasks. + +- **VLANs & Segmentation**: + - **VLAN 10 - Management**: For network infrastructure devices (switches, APs, OPNsense management). + - **VLAN 20 - Work & Personal**: For personal computers, workstations, and laptops. + - **VLAN 30 - IoT Devices**: For smart home devices, like smart bulbs, thermostats, and speakers. + - **VLAN 40 - Entertainment**: For streaming devices, gaming consoles, and smart TVs. + - **VLAN 50 - Guests**: For guests' devices, providing internet access with isolated access to local resources. + +- **Special Configurations**: + - **802.1x Authentication**: Enabled on VLAN 20 for secure access. + - **VPN & SOCKS5**: Configured for selective routing of traffic from VLAN 20 and 40 through NordVPN or a SOCKS5 proxy. 
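Before looking at the diagram and per-VLAN policies below, it helps to have a quick way of confirming that a given VLAN's traffic actually egresses through the intended WAN link. The sketch below is illustrative only: the public IP echo service is an arbitrary choice, and the gateway names you see will depend on your OPNsense interface assignments.

```bash
# From a client on the VLAN under test: the public IP returned should match
# the Comcast address for WAN1 traffic or the T-Mobile CGNAT range for WAN2.
curl -4 -s https://ifconfig.me; echo

# From an OPNsense shell session: list the IPv4 routing table to confirm
# both WAN gateways are present and which one currently holds the default route.
netstat -rn -f inet | head -n 25
```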
+ +### Network Diagram: + +```mermaid +graph LR + Comcast(WAN1 - Comcast) -->|Primary| OPNsense + TMobile(WAN2 - T-Mobile 5G) -->|Secondary| OPNsense + OPNsense -->|Management VLAN10| SwitchAP[Switch & APs] + OPNsense -->|Work/Personal VLAN20| PC[PCs/Laptops] + OPNsense -->|IoT VLAN30| IoT[Smart Devices] + OPNsense -->|Entertainment VLAN40| TV[Streaming/Consoles] + OPNsense -->|Guest VLAN50| Guests[Guest Devices] + PC -->|VPN/SOCKS5| Cloud[VPN & SOCKS5] + TV -->|VPN| Cloud +``` + +### Device Roles and Policies: + +- **Management (VLAN 10)**: Secure VLAN for managing networking equipment. Access restricted to network administrators. +- **Work & Personal (VLAN 20)**: High-priority VLAN for workstations and personal devices. Protected by 802.1x authentication. Selected traffic routed through VPN or SOCKS5 for privacy or geo-restrictions. +- **IoT Devices (VLAN 30)**: Isolated VLAN for IoT devices to enhance security. Internet access allowed, but access to other VLANs restricted. +- **Entertainment (VLAN 40)**: Dedicated VLAN for entertainment devices. Selected traffic can be routed through VPN for content access or privacy. +- **Guests (VLAN 50)**: VLAN for guest devices, providing internet access only with no access to the internal network. + +### Policies: + +- **Traffic Shaping & QoS**: Implemented on VLAN 20 and 40 to prioritize critical traffic (e.g., work-related applications, streaming). +- **Intrusion Detection & Prevention**: Enabled network-wide with tailored rules for IoT and guest VLANs to prevent unauthorized access and mitigate threats. +- **Multi-WAN Rules**: IoT and guest traffic primarily routed through WAN2 (T-Mobile 5G) to conserve WAN1 (Comcast) bandwidth under the data cap. + +This plan provides a solid foundation for a secure, segmented home network, incorporating zero-trust principles and advanced routing to manage traffic across multiple WAN links effectively. It's customizable based on specific devices, user needs, and network policies, offering a starting point for a sophisticated home networking setup. \ No newline at end of file diff --git a/tech_docs/networking/vpn-torrent.md b/tech_docs/networking/vpn-torrent.md new file mode 100644 index 0000000..b87b3a2 --- /dev/null +++ b/tech_docs/networking/vpn-torrent.md @@ -0,0 +1,118 @@ +1. **Volume Paths:** Make sure the paths specified in the `volumes` sections are correct and accessible by Docker. These paths are where your configuration, downloads, and other data will be stored. + +2. **Network Mode:** You've correctly set the network mode of the dependent services (`qbittorrent`, `jackett`, `sonarr`, `prowlarr`) to `service:gluetun`. This ensures that they use the VPN connection established by the `gluetun` service. + +3. **Environment Variables:** + - `VPN_SERVICE_PROVIDER`, `VPN_TYPE`, and related settings in `gluetun` should be correctly configured for your VPN provider. + - Replace `` with your actual WireGuard private key. + - `PUID` and `PGID` should match your user ID and group ID on the host system. + +4. **Ports:** Ensure that the ports you've exposed are open and not being used by other services on your host system. This is important for accessing the web UIs of the respective services. + +5. **Time Zone:** You've set the time zone for most services to `Etc/UTC`. Ensure this is what you intend, or adjust it to your local time zone. + +6. **Restart Policy:** You've set a restart policy for each service, which is good for ensuring they automatically restart in case of a crash or after a reboot. + +7. 
**Service Dependencies:** The `depends_on` section for services like `qbittorrent` is correctly set to depend on `gluetun`. This ensures `gluetun` starts before them. + +8. **Configuration and Data Persistence:** Your setup indicates that configuration and data will be persisted on the host file system in the `/home/ubuntu/docker/arr-stack/` directory. This is good practice for data persistence. + +Before deploying, make sure to replace any placeholders (like ``) with actual values. Once everything is set up, you can start your Docker Compose stack with: + +```bash +docker-compose up -d +``` + +This command will start all services in the background. You can then access the web interfaces of qBittorrent, Jackett, Sonarr, and Prowlarr through the respective ports you've configured. + +```yaml +--- +version: "3" +services: + gluetun: + image: qmcgaw/gluetun + container_name: gluetun + # line above must be uncommented to allow external containers to connect. + # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/connect-a-container-to-gluetun.md#external-container-to-gluetun + cap_add: + - NET_ADMIN + devices: + - /dev/net/tun:/dev/net/tun + ports: + - 6881:6881 + - 6881:6881/udp + - 8085:8085 # qbittorrent + - 9117:9117 # Jackett + - 8989:8989 # Sonarr + - 9696:9696 # Prowlarr + volumes: + - /home/ubuntu/docker/arr-stack:/gluetun + environment: + # See https://github.com/qdm12/gluetun-wiki/tree/main/setup#setup + - VPN_SERVICE_PROVIDER=nordvpn + - VPN_TYPE=wireguard + # OpenVPN: + # - OPENVPN_USER= + # - OPENVPN_PASSWORD= + # Wireguard: + - WIREGUARD_PRIVATE_KEY= # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/providers/nordvpn.md#obtain-your-wireguard-private-key + - WIREGUARD_ADDRESSES=10.5.0.2/32 + # Timezone for accurate log times + - TZ=Europe/London + # Server list updater + # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list + - UPDATER_PERIOD=24h + qbittorrent: + image: lscr.io/linuxserver/qbittorrent + container_name: qbittorrent + network_mode: "service:gluetun" + environment: + - PUID=1000 + - PGID=1000 + - TZ=Europe/London + - WEBUI_PORT=8085 + volumes: + - /home/ubuntu/docker/arr-stack/qbittorrent:/config + - /home/ubuntu/docker/arr-stack/qbittorrent/downloads:/downloads + depends_on: + - gluetun + restart: always + jackett: + image: lscr.io/linuxserver/jackett:latest + container_name: jackett + network_mode: "service:gluetun" + environment: + - PUID=1000 + - PGID=1000 + - TZ=Etc/UTC + - AUTO_UPDATE=true #optional + - RUN_OPTS= #optional + volumes: + - /home/ubuntu/docker/arr-stack/jackett/data:/config + - /home/ubuntu/docker/arr-stack/jackett/blackhole:/downloads + restart: unless-stopped + sonarr: + image: lscr.io/linuxserver/sonarr:latest + container_name: sonarr + network_mode: "service:gluetun" + environment: + - PUID=1000 + - PGID=1000 + - TZ=Etc/UTC + volumes: + - /home/ubuntu/docker/arr-stack/sonarr/data:/config + - /home/ubuntu/docker/arr-stack/sonarr/tvseries:/tv #optional + - /home/ubuntu/docker/arr-stack/sonarr/downloadclient-downloads:/downloads #optional + restart: unless-stopped + prowlarr: + image: lscr.io/linuxserver/prowlarr:latest + container_name: prowlarr + network_mode: "service:gluetun" + environment: + - PUID=1000 + - PGID=1000 + - TZ=Etc/UTC + volumes: + - /home/ubuntu/docker/arr-stack/prowlarr/data:/config + restart: unless-stopped + ``` \ No newline at end of file diff --git a/tech_docs/virtualization/proxmox.md b/tech_docs/virtualization/proxmox.md new file mode 100644 index 
0000000..ecb1d74 --- /dev/null +++ b/tech_docs/virtualization/proxmox.md @@ -0,0 +1,117 @@ +Great! It seems like you're working on an advanced use case of Linux containers using Proxmox. The provided LXC and cgroups administration reference guide is a solid foundation for understanding and managing containers. + +When it comes to using LXC with Proxmox, there are a few additional considerations and Proxmox-specific commands that can be included in your guide. Here are some suggestions to enhance your guide: + +1. Installing Proxmox + - Provide instructions on installing Proxmox on a bare-metal server or a virtual machine. + - Cover the system requirements and installation process specific to Proxmox. + +2. Creating and Managing Containers in Proxmox + - Explain how to create containers using the Proxmox web interface or command-line tools. + - Provide examples of creating containers from ISO images or templates. + - Cover container configuration options available in Proxmox, such as resource allocation, network settings, and storage. + +3. Proxmox-specific Container Management Commands + - Introduce Proxmox-specific commands for managing containers, such as: +To organize the Proxmox commands effectively, we can group them into categories based on their function. Here's a structured layout to help you easily navigate and understand the usage of each command: + +### 1. Container Lifecycle Management +Commands related to creating, managing, and destroying containers. +- **Create and Clone** + - `pct create [OPTIONS]` + - `pct clone [OPTIONS]` +- **Start and Stop** + - `pct start [OPTIONS]` + - `pct stop [OPTIONS]` + - `pct shutdown [OPTIONS]` + - `pct suspend ` + - `pct resume ` + - `pct reboot [OPTIONS]` +- **Removal and Cleanup** + - `pct destroy [OPTIONS]` + - `pct template ` + - `pct restore [OPTIONS]` + +### 2. Container Configuration and Information +Commands for configuring containers and fetching their information. +- **Configuration** + - `pct config [OPTIONS]` + - `pct set [OPTIONS]` +- **Information and Listing** + - `pct list` + - `pct status [OPTIONS]` + - `pct pending ` + +### 3. Snapshot Management +Commands related to managing snapshots of containers. +- `pct snapshot [OPTIONS]` +- `pct listsnapshot ` +- `pct delsnapshot [OPTIONS]` +- `pct rollback [OPTIONS]` + +### 4. Storage and Volume Management +Commands for managing the storage and volumes of containers. +- **Volume Operations** + - `pct move-volume [] [] [] [OPTIONS]` + - `pct resize [OPTIONS]` + - `pct pull [OPTIONS]` + - `pct push [OPTIONS]` +- **Filesystem Operations** + - `pct mount ` + - `pct unmount ` + - `pct fsck [OPTIONS]` + - `pct fstrim [OPTIONS]` + +### 5. Migration and Remote Management +Commands for moving containers and interacting remotely. +- **Migration** + - `pct migrate [OPTIONS]` + - `pct remote-migrate [] --target-bridge --target-storage [OPTIONS]` +- **Remote Interaction** + - `pct console [OPTIONS]` + - `pct enter [OPTIONS]` + - `pct exec [] [OPTIONS]` + +### 6. System Utilities and Miscellaneous +Commands related to system-level operations and utilities. +- `pct cpusets` +- `pct df ` +- `pct rescan [OPTIONS]` +- `pct unlock ` + +### 7. Help and Documentation +- `pct help [] [OPTIONS]` + +This categorization should help you find the appropriate command more quickly based on the task you need to perform with your Proxmox container. + + - Explain the syntax and provide examples of using these commands. + +4. 
Configuring Container Resources in Proxmox + - Describe how to configure container resources, such as CPU, memory, and disk, using Proxmox's web interface or command-line tools. + - Cover the use of Proxmox's "Cores," "Memory," and "Disk" configuration options. + - Explain how Proxmox leverages cgroups for resource management and isolation. + +5. Networking in Proxmox Containers + - Discuss the networking options available for containers in Proxmox, such as bridged, NAT, and VLAN modes. + - Provide examples of configuring network interfaces and firewall rules for containers. + +6. Storage Management for Containers + - Explain how to manage storage for containers in Proxmox, including creating and attaching storage volumes. + - Cover the different storage types supported by Proxmox, such as local storage, network storage (NFS, iSCSI), and distributed storage (Ceph). + +7. Backup and Restoration of Containers + - Provide instructions on backing up and restoring containers using Proxmox's built-in backup tools. + - Explain how to schedule regular backups and configure retention policies. + +8. Monitoring and Troubleshooting + - Discuss the monitoring features available in Proxmox for containers, such as resource usage graphs and logs. + - Provide troubleshooting tips specific to Proxmox containers, such as common error messages and their solutions. + +9. Advanced Topics + - Cover advanced topics relevant to your use case, such as: + - Clustering and high availability for containers. + - Integration with other tools and services (e.g., Kubernetes, Docker). + - Performance tuning and optimization. + - Security best practices for Proxmox containers. + +By incorporating these Proxmox-specific elements into your guide, you'll provide a comprehensive resource for advanced Linux container usage with Proxmox. Make sure to include relevant commands, configuration examples, and best practices throughout the guide to make it practical and easy to follow. \ No newline at end of file diff --git a/tech_docs/virtualization/proxmox_dhcp.md b/tech_docs/virtualization/proxmox_dhcp.md new file mode 100644 index 0000000..f09e308 --- /dev/null +++ b/tech_docs/virtualization/proxmox_dhcp.md @@ -0,0 +1,73 @@ +For your standalone Proxmox setup, switching between static and dynamic IP configurations and managing virtual bridges are important tasks. Below, I'll provide a concise guide to handle these changes effectively and safely. + +### Switching from Static IP to DHCP: + +- **Backup Configurations:** Always backup configuration files before making changes (`cp /etc/network/interfaces /etc/network/interfaces.bak`). + +```bash +cp /etc/network/interfaces /etc/network/interfaces.bak +``` + +**Update Network Interface Configuration:** +Open `/etc/network/interfaces` in a text editor: +```bash +vim /etc/network/interfaces +``` +- Change the `vmbr0` configuration from static to DHCP: +```bash +auto vmbr0 +iface vmbr0 inet dhcp + bridge-ports enp3s0 + bridge-stp off + bridge-fd 0 +``` +- Save the changes and exit the editor. 
+ +- **Restart Networking to Apply Changes:** +- Apply the new network settings: +```bash +systemctl restart networking +``` + +- **Find the New DHCP-Assigned IP Address:** +- After the network restarts, check the assigned IP: +```bash +ip addr show vmbr0 +``` + +- **Update `/etc/hosts` with the New IP:** +- Edit the `/etc/hosts` file to replace the old static IP with the new one: +```bash +nano /etc/hosts +``` +- Modify the line with the old IP to the new one you just obtained: +```plaintext +192.168.86.62 whitebox.foxtrot.lan whitebox # Old IP +192.168.x.x whitebox.foxtrot.lan whitebox # New DHCP IP +``` +- Save and exit. + +### Creating a New Virtual Bridge (`vmbrX`): + +- **Add a New Virtual Bridge Configuration:** + - Edit `/etc/network/interfaces`: + ```bash + vim /etc/network/interfaces + ``` + - Add a new bridge configuration at the end of the file: + ```bash + auto vmbrX # Replace X with the next available number + iface vmbrX inet manual + bridge-ports none + bridge-stp off + bridge-fd 0 + ``` + - Save and exit the editor. + +- **Activate the New Bridge:** + - Restart the networking service to bring up the new bridge: + ```bash + systemctl restart networking + ``` + +### General Notes: diff --git a/tech_docs/virtualization/proxmox_docs.md b/tech_docs/virtualization/proxmox_docs.md new file mode 100644 index 0000000..2db6123 --- /dev/null +++ b/tech_docs/virtualization/proxmox_docs.md @@ -0,0 +1,2642 @@ +## Qemu/KVM Virtual Machines +```wiki + + +{{#pvedocs:qm-plain.html}} +[[Category:Reference Documentation]] + +Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a +physical computer. From the perspective of the host system where Qemu is +running, Qemu is a user program which has access to a number of local resources +like partitions, files, network cards which are then passed to an +emulated computer which sees them as if they were real devices. +A guest operating system running in the emulated computer accesses these +devices, and runs as if it were running on real hardware. For instance, you can pass +an ISO image as a parameter to Qemu, and the OS running in the emulated computer +will see a real CD-ROM inserted into a CD drive. +Qemu can emulate a great variety of hardware from ARM to Sparc, but Proxmox VE is +only concerned with 32 and 64 bits PC clone emulation, since it represents the +overwhelming majority of server hardware. The emulation of PC clones is also one +of the fastest due to the availability of processor extensions which greatly +speed up Qemu when the emulated architecture is the same as the host +architecture. +You may sometimes encounter the term KVM (Kernel-based Virtual Machine). +It means that Qemu is running with the support of the virtualization processor +extensions, via the Linux KVM module. In the context of Proxmox VE Qemu and +KVM can be used interchangeably, as Qemu in Proxmox VE will always try to load the KVM +module. +Qemu inside Proxmox VE runs as a root process, since this is required to access block +and PCI devices. +Emulated devices and paravirtualized devices +The PC hardware emulated by Qemu includes a mainboard, network controllers, +SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in +the kvm(1) man page) all of them emulated in software. All these devices +are the exact software equivalent of existing hardware devices, and if the OS +running in the guest has the proper drivers it will use the devices as if it +were running on real hardware. 
This allows Qemu to run unmodified operating systems.

This, however, has a performance cost, as running in software what was meant to run in hardware involves a lot of extra work for the host CPU. To mitigate this, Qemu can present to the guest operating system paravirtualized devices, where the guest OS recognizes it is running inside Qemu and cooperates with the hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present paravirtualized virtio devices, which include a paravirtualized generic disk controller, a paravirtualized network card, a paravirtualized serial port, a paravirtualized SCSI controller, etc.

It is highly recommended to use virtio devices whenever you can, as they provide a big performance improvement. Using the virtio generic disk controller versus an emulated IDE controller will double the sequential write throughput, as measured with bonnie++(8). Using the virtio network interface can deliver up to three times the throughput of an emulated Intel E1000 network card, as measured with iperf(1). [See this benchmark on the KVM wiki https://www.linux-kvm.org/page/Using_VirtIO_NIC]

Virtual Machines Settings
Generally speaking, Proxmox VE tries to choose sane defaults for virtual machines (VM). Make sure you understand the meaning of the settings you change, as a poor choice could incur a performance slowdown or put your data at risk.

General Settings
General settings of a VM include
the Node: the physical server on which the VM will run
the VM ID: a unique number in this Proxmox VE installation used to identify your VM
Name: a free form text string you can use to describe the VM
Resource Pool: a logical group of VMs

OS Settings
When creating a virtual machine (VM), setting the proper Operating System (OS) allows Proxmox VE to optimize some low level parameters. For instance, a Windows OS expects the BIOS clock to use the local time, while Unix-based OSes expect the BIOS clock to use UTC.

System Settings
On VM creation you can change some basic system components of the new VM. You can specify which display type you want to use. Additionally, the SCSI controller can be changed. If you plan to install the QEMU Guest Agent, or if your selected ISO image already ships and installs it automatically, you may want to tick the Qemu Agent box, which lets Proxmox VE know that it can use its features to show some more information, and complete some actions (for example, shutdown or snapshots) more intelligently.

Proxmox VE allows you to boot VMs with different firmware and machine types, namely SeaBIOS and OVMF. In most cases you want to switch from the default SeaBIOS to OVMF only if you plan to use PCIe pass through. A VM's Machine Type defines the hardware layout of the VM's virtual motherboard. You can choose between the default Intel 440FX or the Q35 chipset, which also provides a virtual PCIe bus, and thus may be desired if one wants to pass through PCIe hardware.

Hard Disk
Bus/Controller
Qemu can emulate a number of storage controllers:
the IDE controller has a design which goes back to the 1984 PC/AT disk controller. Even if this controller has been superseded by recent designs, each and every OS you can think of has support for it, making it a great choice if you want to run an OS released before 2003. You can connect up to 4 devices on this controller.
+the SATA (Serial ATA) controller, dating from 2003, has a more modern +design, allowing higher throughput and a greater number of devices to be +connected. You can connect up to 6 devices on this controller. +the SCSI controller, designed in 1985, is commonly found on server grade +hardware, and can connect up to 14 storage devices. Proxmox VE emulates by default a +LSI 53C895A controller. +A SCSI controller of type VirtIO SCSI is the recommended setting if you aim for +performance and is automatically selected for newly created Linux VMs since +Proxmox VE 4.3. Linux distributions have support for this controller since 2012, and +FreeBSD since 2014. For Windows OSes, you need to provide an extra iso +containing the drivers during the installation. +If you aim at maximum performance, you can select a SCSI controller of type +VirtIO SCSI single which will allow you to select the IO Thread option. +When selecting VirtIO SCSI single Qemu will create a new controller for +each disk, instead of adding all disks to the same controller. +The VirtIO Block controller, often just called VirtIO or virtio-blk, +is an older type of paravirtualized controller. It has been superseded by the +VirtIO SCSI Controller, in terms of features. +Image Format +On each controller you attach a number of emulated hard disks, which are backed +by a file or a block device residing in the configured storage. The choice of +a storage type will determine the format of the hard disk image. Storages which +present block devices (LVM, ZFS, Ceph) will require the raw disk image format, +whereas files based storages (Ext4, NFS, CIFS, GlusterFS) will let you to choose +either the raw disk image format or the QEMU image format. +the QEMU image format is a copy on write format which allows snapshots, and + thin provisioning of the disk image. +the raw disk image is a bit-to-bit image of a hard disk, similar to what + you would get when executing the dd command on a block device in Linux. This + format does not support thin provisioning or snapshots by itself, requiring + cooperation from the storage layer for these tasks. It may, however, be up to + 10% faster than the QEMU image format. [See this benchmark for details + https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf] +the VMware image format only makes sense if you intend to import/export the + disk image to other hypervisors. +Cache Mode +Setting the Cache mode of the hard drive will impact how the host system will +notify the guest systems of block write completions. The No cache default +means that the guest system will be notified that a write is complete when each +block reaches the physical storage write queue, ignoring the host page cache. +This provides a good balance between safety and speed. +If you want the Proxmox VE backup manager to skip a disk when doing a backup of a VM, +you can set the No backup option on that disk. +If you want the Proxmox VE storage replication mechanism to skip a disk when starting + a replication job, you can set the Skip replication option on that disk. +As of Proxmox VE 5.0, replication requires the disk images to be on a storage of type +zfspool, so adding a disk image to other storages when the VM has replication +configured requires to skip replication for this disk image. +Trim/Discard +If your storage supports thin provisioning (see the storage chapter in the +Proxmox VE guide), you can activate the Discard option on a drive. 
With Discard +set and a TRIM-enabled guest OS [TRIM, UNMAP, and discard +https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM’s filesystem +marks blocks as unused after deleting files, the controller will relay this +information to the storage, which will then shrink the disk image accordingly. +For the guest to be able to issue TRIM commands, you must enable the Discard +option on the drive. Some guest operating systems may also require the +SSD Emulation flag to be set. Note that Discard on VirtIO Block drives is +only supported on guests using Linux Kernel 5.0 or higher. +If you would like a drive to be presented to the guest as a solid-state drive +rather than a rotational hard disk, you can set the SSD emulation option on +that drive. There is no requirement that the underlying storage actually be +backed by SSDs; this feature can be used with physical media of any type. +Note that SSD emulation is not supported on VirtIO Block drives. +IO Thread +The option IO Thread can only be used when using a disk with the +VirtIO controller, or with the SCSI controller, when the emulated controller + type is VirtIO SCSI single. +With this enabled, Qemu creates one I/O thread per storage controller, +rather than a single thread for all I/O. This can increase performance when +multiple disks are used and each disk has its own storage controller. +CPU +A CPU socket is a physical slot on a PC motherboard where you can plug a CPU. +This CPU can then contain one or many cores, which are independent +processing units. Whether you have a single CPU socket with 4 cores, or two CPU +sockets with two cores is mostly irrelevant from a performance point of view. +However some software licenses depend on the number of sockets a machine has, +in that case it makes sense to set the number of sockets to what the license +allows you. +Increasing the number of virtual CPUs (cores and sockets) will usually provide a +performance improvement though that is heavily dependent on the use of the VM. +Multi-threaded applications will of course benefit from a large number of +virtual CPUs, as for each virtual cpu you add, Qemu will create a new thread of +execution on the host system. If you’re not sure about the workload of your VM, +it is usually a safe bet to set the number of Total cores to 2. +It is perfectly safe if the overall number of cores of all your VMs +is greater than the number of cores on the server (for example, 4 VMs each with +4 cores (= total 16) on a machine with only 8 cores). In that case the host +system will balance the QEMU execution threads between your server cores, just +like if you were running a standard multi-threaded application. However, Proxmox VE +will prevent you from starting VMs with more virtual CPU cores than physically +available, as this will only bring the performance down due to the cost of +context switches. +Resource Limits +In addition to the number of virtual cores, you can configure how much resources +a VM can get in relation to the host CPU time and also in relation to other +VMs. +With the cpulimit (“Host CPU Time”) option you can limit how much CPU time +the whole VM can use on the host. It is a floating point value representing CPU +time in percent, so 1.0 is equal to 100%, 2.5 to 250% and so on. If a +single process would fully use one single core it would have 100% CPU Time +usage. If a VM with four cores utilizes all its cores fully it would +theoretically use 400%. 
In reality the usage may be even a bit higher as Qemu
+can have additional threads for VM peripherals besides the vCPU core ones.
+This setting can be useful if a VM should have multiple vCPUs, as it runs a few
+processes in parallel, but the VM as a whole should not be able to run all
+vCPUs at 100% at the same time. Using a specific example: let’s say we have a VM
+which would profit from having 8 vCPUs, but at no time all of those 8 cores
+should run at full load - as this would make the server so overloaded that
+other VMs and CTs would get too little CPU. So, we set the cpulimit to
+4.0 (=400%). If all cores do the same heavy work they would all get 50% of a
+real host core’s CPU time. But, if only 4 would do work they could still get
+almost 100% of a real core each.
+VMs can, depending on their configuration, use additional threads, such
+as for networking or IO operations but also live migration. Thus a VM can show
+up to use more CPU time than just its virtual CPUs could use. To ensure that a
+VM never uses more CPU time than its assigned virtual CPUs, set the cpulimit
+setting to the same value as the total core count.
+The second CPU resource limiting setting, cpuunits (nowadays often called CPU
+shares or CPU weight), controls how much CPU time a VM gets compared to other
+running VMs. It is a relative weight which defaults to 100 (or 1024 if the
+host uses legacy cgroup v1). If you increase this for a VM it will be
+prioritized by the scheduler in comparison to other VMs with lower weight. For
+example, if VM 100 has set the default 100 and VM 200 was changed to 200,
+the latter VM 200 would receive twice the CPU bandwidth of the first VM 100.
+For more information see man systemd.resource-control; there, CPUQuota
+corresponds to cpulimit and CPUWeight corresponds to our cpuunits
+setting. Visit its Notes section for references and implementation details.
+The third CPU resource limiting setting, affinity, controls what host cores
+the virtual machine will be permitted to execute on. E.g., if an affinity value
+of 0-3,8-11 is provided, the virtual machine will be restricted to using the
+host cores 0,1,2,3,8,9,10, and 11. Valid affinity values are written in
+cpuset List Format. List Format is a comma-separated list of CPU numbers and
+ranges of numbers, in ASCII decimal.
+CPU affinity uses the taskset command to restrict virtual machines to
+a given set of cores. This restriction will not take effect for some types of
+processes that may be created for IO. CPU affinity is not a security feature.
+For more information regarding affinity see man cpuset. Here the
+List Format corresponds to valid affinity values. Visit its Formats
+section for more examples.
+CPU Type
+Qemu can emulate a number of different CPU types from 486 to the latest Xeon
+processors. Each new processor generation adds new features, like hardware
+assisted 3D rendering, random number generation, memory protection, etc …
+Usually you should select for your VM a processor type which closely matches the
+CPU of the host system, as it means that the host CPU features (also called CPU
+flags) will be available in your VMs. If you want an exact match, you can set
+the CPU type to host in which case the VM will have exactly the same CPU flags
+as your host system.
+This has a downside though. If you want to do a live migration of VMs between
+different hosts, your VM might end up on a new system with a different CPU type.
+If the CPU flags passed to the guest are missing, the qemu process will stop. 
To +remedy this Qemu has also its own CPU type kvm64, that Proxmox VE uses by defaults. +kvm64 is a Pentium 4 look a like CPU type, which has a reduced CPU flags set, +but is guaranteed to work everywhere. +In short, if you care about live migration and moving VMs between nodes, leave +the kvm64 default. If you don’t care about live migration or have a homogeneous +cluster where all nodes have the same CPU, set the CPU type to host, as in +theory this will give your guests maximum performance. +Custom CPU Types +You can specify custom CPU types with a configurable set of features. These are +maintained in the configuration file /etc/pve/virtual-guest/cpu-models.conf by +an administrator. See man cpu-models.conf for format details. +Specified custom types can be selected by any user with the Sys.Audit +privilege on /nodes. When configuring a custom CPU type for a VM via the CLI +or API, the name needs to be prefixed with custom-. +Meltdown / Spectre related CPU flags +There are several CPU flags related to the Meltdown and Spectre vulnerabilities +[Meltdown Attack https://meltdownattack.com/] which need to be set +manually unless the selected CPU type of your VM already enables them by default. +There are two requirements that need to be fulfilled in order to use these +CPU flags: +The host CPU(s) must support the feature and propagate it to the guest’s virtual CPU(s) +The guest operating system must be updated to a version which mitigates the + attacks and is able to utilize the CPU feature +Otherwise you need to set the desired CPU flag of the virtual CPU, either by +editing the CPU options in the WebUI, or by setting the flags property of the +cpu option in the VM configuration file. +For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a +so-called “microcode update” [You can use ‘intel-microcode’ / +‘amd-microcode’ from Debian non-free if your vendor does not provide such an +update. Note that not all affected CPUs can be updated to support spec-ctrl.] +for your CPU. +To check if the Proxmox VE host is vulnerable, execute the following command as root: +for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done +A community script is also available to detect is the host is still vulnerable. +[spectre-meltdown-checker https://meltdown.ovh/] +Intel processors +pcid +This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation +called Kernel Page-Table Isolation (KPTI), which effectively hides +the Kernel memory from the user space. Without PCID, KPTI is quite an expensive +mechanism [PCID is now a critical performance/security feature on x86 +https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU]. +To check if the Proxmox VE host supports PCID, execute the following command as root: +# grep ' pcid ' /proc/cpuinfo +If this does not return empty your host’s CPU has support for pcid. +spec-ctrl +Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix, +in cases where retpolines are not sufficient. +Included by default in Intel CPU models with -IBRS suffix. +Must be explicitly turned on for Intel CPU models without -IBRS suffix. +Requires an updated host CPU microcode (intel-microcode >= 20180425). +ssbd +Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model. +Must be explicitly turned on for all Intel CPU models. +Requires an updated host CPU microcode(intel-microcode >= 20180703). 
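For example, to add the pcid and spec-ctrl flags to a VM that keeps the default kvm64 CPU type, a CLI sketch could look like this (VMID 100 is only a placeholder; enable only flags your host CPU and microcode actually support):
# qm set 100 -cpu 'cputype=kvm64,flags=+pcid;+spec-ctrl'
This ends up in the VM configuration file as a cpu: line, roughly cpu: kvm64,flags=+pcid;+spec-ctrl.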
+
+AMD processors
+ibpb
+Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
+in cases where retpolines are not sufficient.
+Included by default in AMD CPU models with -IBPB suffix.
+Must be explicitly turned on for AMD CPU models without -IBPB suffix.
+Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
+virt-ssbd
+Required to enable the Spectre v4 (CVE-2018-3639) fix.
+Not included by default in any AMD CPU model.
+Must be explicitly turned on for all AMD CPU models.
+This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
+Note that this must be explicitly enabled when using the "host" cpu model,
+because this is a virtual feature which does not exist in the physical CPUs.
+amd-ssbd
+Required to enable the Spectre v4 (CVE-2018-3639) fix.
+Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
+This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
+virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.
+amd-no-ssb
+Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
+Not included by default in any AMD CPU model.
+Future CPU hardware generations will not be vulnerable to CVE-2018-3639,
+and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
+This is mutually exclusive with virt-ssbd and amd-ssbd.
+NUMA
+You can also optionally emulate a NUMA
+[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
+in your VMs. The basics of the NUMA architecture mean that instead of having a
+global memory pool available to all your cores, the memory is spread into local
+banks close to each socket.
+This can bring speed improvements as the memory bus is not a bottleneck
+anymore. If your system has a NUMA architecture [if the command
+numactl --hardware | grep available returns more than one node, then your host
+system has a NUMA architecture] we recommend activating the option, as this
+will allow proper distribution of the VM resources on the host system.
+This option is also required to hot-plug cores or RAM in a VM.
+If the NUMA option is used, it is recommended to set the number of sockets to
+the number of nodes of the host system.
+vCPU hot-plug
+Modern operating systems introduced the capability to hot-plug and, to a
+certain extent, hot-unplug CPUs in a running system. Virtualization allows us
+to avoid a lot of the (physical) problems real hardware can cause in such
+scenarios.
+Still, this is a rather new and complicated feature, so its use should be
+restricted to cases where it’s absolutely needed. Most of the functionality can
+be replicated with other, well tested and less complicated, features; see
+Resource Limits.
+In Proxmox VE the maximal number of plugged CPUs is always cores * sockets.
+To start a VM with less than this total core count of CPUs you may use the
+vcpus setting; it denotes how many vCPUs should be plugged in at VM start.
+Currently this feature is only supported on Linux; a kernel newer than 3.10
+is needed, and a kernel newer than 4.7 is recommended. 
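As a sketch (VMID 100 is a placeholder), a VM with 4 hot-pluggable cores of which only 2 are plugged in at start time could be configured with:
# qm set 100 -sockets 1 -cores 4 -vcpus 2
The remaining vCPUs can then be plugged in later, up to the cores * sockets maximum.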
+You can use a udev rule as follow to automatically set new CPUs as online in +the guest: +SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1" +Save this under /etc/udev/rules.d/ as a file ending in .rules. +Note: CPU hot-remove is machine dependent and requires guest cooperation. The +deletion command does not guarantee CPU removal to actually happen, typically +it’s a request forwarded to guest OS using target dependent mechanism, such as +ACPI on x86/amd64. +Memory +For each VM you have the option to set a fixed size memory or asking +Proxmox VE to dynamically allocate memory based on the current RAM usage of the +host. +Fixed Memory Allocation +When setting memory and minimum memory to the same amount +Proxmox VE will simply allocate what you specify to your VM. +Even when using a fixed memory size, the ballooning device gets added to the +VM, because it delivers useful information such as how much memory the guest +really uses. +In general, you should leave ballooning enabled, but if you want to disable +it (like for debugging purposes), simply uncheck Ballooning Device or set +balloon: 0 +in the configuration. +Automatic Memory Allocation +When setting the minimum memory lower than memory, Proxmox VE will make sure that the +minimum amount you specified is always available to the VM, and if RAM usage on +the host is below 80%, will dynamically add memory to the guest up to the +maximum memory specified. +When the host is running low on RAM, the VM will then release some memory +back to the host, swapping running processes if needed and starting the oom +killer in last resort. The passing around of memory between host and guest is +done via a special balloon kernel driver running inside the guest, which will +grab or release memory pages from the host. +[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/] +When multiple VMs use the autoallocate facility, it is possible to set a +Shares coefficient which indicates the relative amount of the free host memory +that each VM should take. Suppose for instance you have four VMs, three of them +running an HTTP server and the last one is a database server. To cache more +database blocks in the database server RAM, you would like to prioritize the +database VM when spare RAM is available. For this you assign a Shares property +of 3000 to the database VM, leaving the other VMs to the Shares default setting +of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32 +* 80/100 - 16 = 9GB RAM to be allocated to the VMs. The database VM will get 9 * +3000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server will +get 1.5 GB. +All Linux distributions released after 2010 have the balloon kernel driver +included. For Windows OSes, the balloon driver needs to be added manually and can +incur a slowdown of the guest, so we don’t recommend using it on critical +systems. +When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB +of RAM available to the host. +Network Device +Each VM can have many Network interface controllers (NIC), of four different +types: +Intel E1000 is the default, and emulates an Intel Gigabit network card. +the VirtIO paravirtualized NIC should be used if you aim for maximum +performance. Like all VirtIO devices, the guest OS should have the proper driver +installed. 
+the Realtek 8139 emulates an older 100 MB/s network card, and should +only be used when emulating older operating systems ( released before 2002 ) +the vmxnet3 is another paravirtualized device, which should only be used +when importing a VM from another hypervisor. +Proxmox VE will generate for each NIC a random MAC address, so that your VM is +addressable on Ethernet networks. +The NIC you added to the VM can follow one of two different models: +in the default Bridged mode each virtual NIC is backed on the host by a +tap device, ( a software loopback device simulating an Ethernet NIC ). This +tap device is added to a bridge, by default vmbr0 in Proxmox VE. In this mode, VMs +have direct access to the Ethernet LAN on which the host is located. +in the alternative NAT mode, each virtual NIC will only communicate with +the Qemu user networking stack, where a built-in router and DHCP server can +provide network access. This built-in DHCP will serve addresses in the private +10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and +should only be used for testing. This mode is only available via CLI or the API, +but not via the WebUI. +You can also skip adding a network device when creating a VM by selecting No +network device. +You can overwrite the MTU setting for each VM network device. The option +mtu=1 represents a special case, in which the MTU value will be inherited +from the underlying bridge. +This option is only available for VirtIO network devices. +Multiqueue +If you are using the VirtIO driver, you can optionally activate the +Multiqueue option. This option allows the guest OS to process networking +packets using multiple virtual CPUs, providing an increase in the total number +of packets transferred. +When using the VirtIO driver with Proxmox VE, each NIC network queue is passed to the +host kernel, where the queue will be processed by a kernel thread spawned by the +vhost driver. With this option activated, it is possible to pass multiple +network queues to the host kernel for each NIC. +When using Multiqueue, it is recommended to set it to a value equal +to the number of Total Cores of your guest. You also need to set in +the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool +command: +ethtool -L ens1 combined X +where X is the number of the number of vcpus of the VM. +You should note that setting the Multiqueue parameter to a value greater +than one will increase the CPU load on the host and guest systems as the +traffic increases. We recommend to set this option only when the VM has to +process a great number of incoming connections, such as when the VM is running +as a router, reverse proxy or a busy HTTP server doing long polling. +Display +QEMU can virtualize a few types of VGA hardware. Some examples are: +std, the default, emulates a card with Bochs VBE extensions. +cirrus, this was once the default, it emulates a very old hardware module +with all its problems. This display type should only be used if really +necessary [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ +qemu: using cirrus considered harmful], for example, if using Windows XP or +earlier +vmware, is a VMWare SVGA-II compatible adapter. +qxl, is the QXL paravirtualized graphics card. Selecting this also +enables SPICE (a remote viewer protocol) for the +VM. 
+virtio-gl, often named VirGL is a virtual 3D GPU for use inside VMs that + can offload workloads to the host GPU without requiring special (expensive) + models and drivers and neither binding the host GPU completely, allowing + reuse between multiple guests and or the host. +VirGL support needs some extra libraries that aren’t installed by +default due to being relatively big and also not available as open source for +all GPU models/vendors. For most setups you’ll just need to do: +apt install libgl1 libegl1 +You can edit the amount of memory given to the virtual GPU, by setting +the memory option. This can enable higher resolutions inside the VM, +especially with SPICE/QXL. +As the memory is reserved by display device, selecting Multi-Monitor mode +for SPICE (such as qxl2 for dual monitors) has some implications: +Windows needs a device for each monitor, so if your ostype is some +version of Windows, Proxmox VE gives the VM an extra device per monitor. +Each device gets the specified amount of memory. +Linux VMs, can always enable more virtual monitors, but selecting +a Multi-Monitor mode multiplies the memory given to the device with +the number of monitors. +Selecting serialX as display type disables the VGA output, and redirects +the Web Console to the selected serial port. A configured display memory +setting will be ignored in that case. +USB Passthrough +There are two different types of USB passthrough devices: +Host USB passthrough +SPICE USB passthrough +Host USB passthrough works by giving a VM a USB device of the host. +This can either be done via the vendor- and product-id, or +via the host bus and port. +The vendor/product-id looks like this: 0123:abcd, +where 0123 is the id of the vendor, and abcd is the id +of the product, meaning two pieces of the same usb device +have the same id. +The bus/port looks like this: 1-2.3.4, where 1 is the bus +and 2.3.4 is the port path. This represents the physical +ports of your host (depending of the internal order of the +usb controllers). +If a device is present in a VM configuration when the VM starts up, +but the device is not present in the host, the VM can boot without problems. +As soon as the device/port is available in the host, it gets passed through. +Using this kind of USB passthrough means that you cannot move +a VM online to another host, since the hardware is only available +on the host the VM is currently residing. +The second type of passthrough is SPICE USB passthrough. This is useful +if you use a SPICE client which supports it. If you add a SPICE USB port +to your VM, you can passthrough a USB device from where your SPICE client is, +directly to the VM (for example an input device or hardware dongle). +BIOS and UEFI +In order to properly emulate a computer, QEMU needs to use a firmware. +Which, on common PCs often known as BIOS or (U)EFI, is executed as one of the +first steps when booting a VM. It is responsible for doing basic hardware +initialization and for providing an interface to the firmware and hardware for +the operating system. By default QEMU uses SeaBIOS for this, which is an +open-source, x86 BIOS implementation. SeaBIOS is a good choice for most +standard setups. +Some operating systems (such as Windows 11) may require use of an UEFI +compatible implementation instead. In such cases, you must rather use OVMF, +which is an open-source UEFI implementation. 
[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF] +There are other scenarios in which the SeaBIOS may not be the ideal firmware to +boot from, for example if you want to do VGA passthrough. [Alex +Williamson has a good blog entry about this +https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html] +If you want to use OVMF, there are several things to consider: +In order to save things like the boot order, there needs to be an EFI Disk. +This disk will be included in backups and snapshots, and there can only be one. +You can create such a disk with the following command: +# qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1 +Where <storage> is the storage where you want to have the disk, and +<format> is a format which the storage supports. Alternatively, you can +create such a disk through the web interface with Add → EFI Disk in the +hardware section of a VM. +The efitype option specifies which version of the OVMF firmware should be +used. For new VMs, this should always be 4m, as it supports Secure Boot and +has more space allocated to support future development (this is the default in +the GUI). +pre-enroll-keys specifies if the efidisk should come pre-loaded with +distribution-specific and Microsoft Standard Secure Boot keys. It also enables +Secure Boot by default (though it can still be disabled in the OVMF menu within +the VM). +If you want to start using Secure Boot in an existing VM (that still uses +a 2m efidisk), you need to recreate the efidisk. To do so, delete the old one +(qm set <vmid> -delete efidisk0) and add a new one as described above. This +will reset any custom configurations you have made in the OVMF menu! +When using OVMF with a virtual display (without VGA passthrough), +you need to set the client resolution in the OVMF menu (which you can reach +with a press of the ESC button during boot), or you have to choose +SPICE as the display type. +Trusted Platform Module (TPM) +A Trusted Platform Module is a device which stores secret data - such as +encryption keys - securely and provides tamper-resistance functions for +validating system boot. +Certain operating systems (such as Windows 11) require such a device to be +attached to a machine (be it physical or virtual). +A TPM is added by specifying a tpmstate volume. This works similar to an +efidisk, in that it cannot be changed (only removed) once created. You can add +one via the following command: +# qm set <vmid> -tpmstate0 <storage>:1,version=<version> +Where <storage> is the storage you want to put the state on, and <version> +is either v1.2 or v2.0. You can also add one via the web interface, by +choosing Add → TPM State in the hardware section of a VM. +The v2.0 TPM spec is newer and better supported, so unless you have a specific +implementation that requires a v1.2 TPM, it should be preferred. +Compared to a physical TPM, an emulated one does not provide any real +security benefits. The point of a TPM is that the data on it cannot be modified +easily, except via commands specified as part of the TPM spec. Since with an +emulated device the data storage happens on a regular volume, it can potentially +be edited by anyone with access to it. +Inter-VM shared memory +You can add an Inter-VM shared memory device (ivshmem), which allows one to +share memory between the host and a guest, or also between multiple guests. +To add such a device, you can use qm: +# qm set <vmid> -ivshmem size=32,name=foo +Where the size is in MiB. 
The file will be located under +/dev/shm/pve-shm-$name (the default name is the vmid). +Currently the device will get deleted as soon as any VM using it got +shutdown or stopped. Open connections will still persist, but new connections +to the exact same device cannot be made anymore. +A use case for such a device is the Looking Glass +[Looking Glass: https://looking-glass.io/] project, which enables high +performance, low-latency display mirroring between host and guest. +Audio Device +To add an audio device run the following command: +qm set <vmid> -audio0 device=<device> +Supported audio devices are: +ich9-intel-hda: Intel HD Audio Controller, emulates ICH9 +intel-hda: Intel HD Audio Controller, emulates ICH6 +AC97: Audio Codec '97, useful for older operating systems like Windows XP +There are two backends available: +spice +none +The spice backend can be used in combination with SPICE while +the none backend can be useful if an audio device is needed in the VM for some +software to work. To use the physical audio device of the host use device +passthrough (see PCI Passthrough and +USB Passthrough). Remote protocols like Microsoft’s RDP +have options to play sound. +VirtIO RNG +A RNG (Random Number Generator) is a device providing entropy (randomness) to +a system. A virtual hardware-RNG can be used to provide such entropy from the +host system to a guest VM. This helps to avoid entropy starvation problems in +the guest (a situation where not enough entropy is available and the system may +slow down or run into problems), especially during the guests boot process. +To add a VirtIO-based emulated RNG, run the following command: +qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y] +source specifies where entropy is read from on the host and has to be one of +the following: +/dev/urandom: Non-blocking kernel entropy pool (preferred) +/dev/random: Blocking kernel pool (not recommended, can lead to entropy + starvation on the host system) +/dev/hwrng: To pass through a hardware RNG attached to the host (if multiple + are available, the one selected in + /sys/devices/virtual/misc/hw_random/rng_current will be used) +A limit can be specified via the max_bytes and period parameters, they are +read as max_bytes per period in milliseconds. However, it does not represent +a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes +available on a 1 second timer, not that 1 KiB is streamed to the guest over the +course of one second. Reducing the period can thus be used to inject entropy +into the guest at a faster rate. +By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is +recommended to always use a limiter to avoid guests using too many host +resources. If desired, a value of 0 for max_bytes can be used to disable +all limits. +Device Boot Order +QEMU can tell the guest which devices it should boot from, and in which order. +This can be specified in the config via the boot property, for example: +boot: order=scsi0;net0;hostpci0 +This way, the guest would first attempt to boot from the disk scsi0, if that +fails, it would go on to attempt network boot from net0, and in case that +fails too, finally attempt to boot from a passed through PCIe device (seen as +disk in case of NVMe, otherwise tries to launch into an option ROM). +On the GUI you can use a drag-and-drop editor to specify the boot order, and use +the checkbox to enable or disable certain devices for booting altogether. 
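The same order can also be set from the CLI; a sketch, assuming a placeholder VMID of 300 and a VM that actually has a scsi0 disk and a net0 NIC:
# qm set 300 -boot 'order=scsi0;net0'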
+If your guest uses multiple disks to boot the OS or load the bootloader, +all of them must be marked as bootable (that is, they must have the checkbox +enabled or appear in the list in the config) for the guest to be able to boot. +This is because recent SeaBIOS and OVMF versions only initialize disks if they +are marked bootable. +In any case, even devices not appearing in the list or having the checkmark +disabled will still be available to the guest, once it’s operating system has +booted and initialized them. The bootable flag only affects the guest BIOS and +bootloader. +Automatic Start and Shutdown of Virtual Machines +After creating your VMs, you probably want them to start automatically +when the host system boots. For this you need to select the option Start at +boot from the Options Tab of your VM in the web interface, or set it with +the following command: +# qm set <vmid> -onboot 1 +Start and Shutdown Order +In some case you want to be able to fine tune the boot order of your +VMs, for instance if one of your VM is providing firewalling or DHCP +to other guest systems. For this you can use the following +parameters: +Start/Shutdown order: Defines the start order priority. For example, set it +to 1 if +you want the VM to be the first to be started. (We use the reverse startup +order for shutdown, so a machine with a start order of 1 would be the last to +be shut down). If multiple VMs have the same order defined on a host, they will +additionally be ordered by VMID in ascending order. +Startup delay: Defines the interval between this VM start and subsequent +VMs starts. For example, set it to 240 if you want to wait 240 seconds before +starting other VMs. +Shutdown timeout: Defines the duration in seconds Proxmox VE should wait +for the VM to be offline after issuing a shutdown command. By default this +value is set to 180, which means that Proxmox VE will issue a shutdown request and +wait 180 seconds for the machine to be offline. If the machine is still online +after the timeout it will be stopped forcefully. +VMs managed by the HA stack do not follow the start on boot and +boot order options currently. Those VMs will be skipped by the startup and +shutdown algorithm as the HA manager itself ensures that VMs get started and +stopped. +Please note that machines without a Start/Shutdown order parameter will always +start after those where the parameter is set. Further, this parameter can only +be enforced between virtual machines running on the same host, not +cluster-wide. +If you require a delay between the host boot and the booting of the first VM, +see the section on Proxmox VE Node Management. +Qemu Guest Agent +The Qemu Guest Agent is a service which runs inside the VM, providing a +communication channel between the host and the guest. It is used to exchange +information and allows the host to issue commands to the guest. +For example, the IP addresses in the VM summary panel are fetched via the guest +agent. +Or when starting a backup, the guest is told via the guest agent to sync +outstanding writes via the fs-freeze and fs-thaw commands. +For the guest agent to work properly the following steps must be taken: +install the agent in the guest and make sure it is running +enable the communication via the agent in Proxmox VE +Install Guest Agent +For most Linux distributions, the guest agent is available. The package is +usually named qemu-guest-agent. +For Windows, it can be installed from the +Fedora +VirtIO driver ISO. 
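On a Debian or Ubuntu based guest, for example, installing and starting the agent usually comes down to the following (package and service names may differ on other distributions):
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent
On the Proxmox VE side, the option described in the next section can also be set from the CLI with qm set <vmid> -agent enabled=1.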
+Enable Guest Agent Communication +Communication from Proxmox VE with the guest agent can be enabled in the VM’s +Options panel. A fresh start of the VM is necessary for the changes to take +effect. +It is possible to enable the Run guest-trim option. With this enabled, +Proxmox VE will issue a trim command to the guest after the following +operations that have the potential to write out zeros to the storage: +moving a disk to another storage +live migrating a VM to another node with local storage +On a thin provisioned storage, this can help to free up unused space. +Troubleshooting +VM does not shut down +Make sure the guest agent is installed and running. +Once the guest agent is enabled, Proxmox VE will send power commands like +shutdown via the guest agent. If the guest agent is not running, commands +cannot get executed properly and the shutdown command will run into a timeout. +SPICE Enhancements +SPICE Enhancements are optional features that can improve the remote viewer +experience. +To enable them via the GUI go to the Options panel of the virtual machine. Run +the following command to enable them via the CLI: +qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all +To use these features the Display of the virtual machine +must be set to SPICE (qxl). +Folder Sharing +Share a local folder with the guest. The spice-webdavd daemon needs to be +installed in the guest. It makes the shared folder available through a local +WebDAV server located at http://localhost:9843. +For Windows guests the installer for the Spice WebDAV daemon can be downloaded +from the +official SPICE website. +Most Linux distributions have a package called spice-webdavd that can be +installed. +To share a folder in Virt-Viewer (Remote Viewer) go to File → Preferences. +Select the folder to share and then enable the checkbox. +Folder sharing currently only works in the Linux version of Virt-Viewer. +Experimental! Currently this feature does not work reliably. +Video Streaming +Fast refreshing areas are encoded into a video stream. Two options exist: +all: Any fast refreshing area will be encoded into a video stream. +filter: Additional filters are used to decide if video streaming should be + used (currently only small window surfaces are skipped). +A general recommendation if video streaming should be enabled and which option +to choose from cannot be given. Your mileage may vary depending on the specific +circumstances. +Troubleshooting +Shared folder does not show up +Make sure the WebDAV service is enabled and running in the guest. On Windows it +is called Spice webdav proxy. In Linux the name is spice-webdavd but can be +different depending on the distribution. +If the service is running, check the WebDAV server by opening +http://localhost:9843 in a browser in the guest. +It can help to restart the SPICE session. +Migration +If you have a cluster, you can migrate your VM to another host with +# qm migrate <vmid> <target> +There are generally two mechanisms for this +Online Migration (aka Live Migration) +Offline Migration +Online Migration +If your VM is running and no locally bound resources are configured (such as +passed-through devices), you can initiate a live migration with the --online +flag in the qm migration command evocation. The web-interface defaults to +live migration when the VM is running. 
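For example, a sketch in which 100 and node2 stand in for a real VMID and target node name:
# qm migrate 100 node2 --online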
+How it works +Online migration first starts a new QEMU process on the target host with the +incoming flag, which performs only basic initialization with the guest vCPUs +still paused and then waits for the guest memory and device state data streams +of the source Virtual Machine. +All other resources, such as disks, are either shared or got already sent +before runtime state migration of the VMs begins; so only the memory content +and device state remain to be transferred. +Once this connection is established, the source begins asynchronously sending +the memory content to the target. If the guest memory on the source changes, +those sections are marked dirty and another pass is made to send the guest +memory data. +This loop is repeated until the data difference between running source VM +and incoming target VM is small enough to be sent in a few milliseconds, +because then the source VM can be paused completely, without a user or program +noticing the pause, so that the remaining data can be sent to the target, and +then unpause the targets VM’s CPU to make it the new running VM in well under a +second. +Requirements +For Live Migration to work, there are some things required: +The VM has no local resources that cannot be migrated. For example, + PCI or USB devices that are passed through currently block live-migration. + Local Disks, on the other hand, can be migrated by sending them to the target + just fine. +The hosts are located in the same Proxmox VE cluster. +The hosts have a working (and reliable) network connection between them. +The target host must have the same, or higher versions of the + Proxmox VE packages. Although it can sometimes work the other way around, this + cannot be guaranteed. +The hosts have CPUs from the same vendor with similar capabilities. Different + vendor might work depending on the actual models and VMs CPU type + configured, but it cannot be guaranteed - so please test before deploying + such a setup in production. +Offline Migration +If you have local resources, you can still migrate your VMs offline as long as +all disk are on storage defined on both hosts. +Migration then copies the disks to the target host over the network, as with +online migration. Note that any hardware pass-through configuration may need to +be adapted to the device location on the target host. +Copies and Clones +VM installation is usually done using an installation media (CD-ROM) +from the operating system vendor. Depending on the OS, this can be a +time consuming task one might want to avoid. +An easy way to deploy many VMs of the same type is to copy an existing +VM. We use the term clone for such copies, and distinguish between +linked and full clones. +Full Clone +The result of such copy is an independent VM. The +new VM does not share any storage resources with the original. +It is possible to select a Target Storage, so one can use this to +migrate a VM to a totally different storage. You can also change the +disk image Format if the storage driver supports several formats. +A full clone needs to read and copy all VM image data. This is +usually much slower than creating a linked clone. +Some storage types allows to copy a specific Snapshot, which +defaults to the current VM data. This also means that the final copy +never includes any additional snapshots from the original VM. +Linked Clone +Modern storage drivers support a way to generate fast linked +clones. Such a clone is a writable copy whose initial contents are the +same as the original data. 
Creating a linked clone is nearly +instantaneous, and initially consumes no additional space. +They are called linked because the new image still refers to the +original. Unmodified data blocks are read from the original image, but +modification are written (and afterwards read) from a new +location. This technique is called Copy-on-write. +This requires that the original volume is read-only. With Proxmox VE one +can convert any VM into a read-only Template). Such +templates can later be used to create linked clones efficiently. +You cannot delete an original template while linked clones +exist. +It is not possible to change the Target storage for linked clones, +because this is a storage internal feature. +The Target node option allows you to create the new VM on a +different node. The only restriction is that the VM is on shared +storage, and that storage is also available on the target node. +To avoid resource conflicts, all network interface MAC addresses get +randomized, and we generate a new UUID for the VM BIOS (smbios1) +setting. +Virtual Machine Templates +One can convert a VM into a Template. Such templates are read-only, +and you can use them to create linked clones. +It is not possible to start templates, because this would modify +the disk images. If you want to change the template, create a linked +clone and modify that. +VM Generation ID +Proxmox VE supports Virtual Machine Generation ID (vmgenid) [Official +vmgenid Specification +https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier] +for virtual machines. +This can be used by the guest operating system to detect any event resulting +in a time shift event, for example, restoring a backup or a snapshot rollback. +When creating new VMs, a vmgenid will be automatically generated and saved +in its configuration file. +To create and add a vmgenid to an already existing VM one can pass the +special value ‘1’ to let Proxmox VE autogenerate one or manually set the UUID +[Online GUID generator http://guid.one/] by using it as value, for +example: +# qm set VMID -vmgenid 1 +# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000 +The initial addition of a vmgenid device to an existing VM, may result +in the same effects as a change on snapshot rollback, backup restore, etc., has +as the VM can interpret this as generation change. +In the rare case the vmgenid mechanism is not wanted one can pass ‘0’ for +its value on VM creation, or retroactively delete the property in the +configuration with: +# qm set VMID -delete vmgenid +The most prominent use case for vmgenid are newer Microsoft Windows +operating systems, which use it to avoid problems in time sensitive or +replicate services (such as databases or domain controller +[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture]) +on snapshot rollback, backup restore or a whole VM clone operation. +Importing Virtual Machines and disk images +A VM export from a foreign hypervisor takes usually the form of one or more disk + images, with a configuration file describing the settings of the VM (RAM, + number of cores). +The disk images can be in the vmdk format, if the disks come from +VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor. 
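If you only have such a standalone disk image, one way to bring it into Proxmox VE is to import it into an existing VM; a sketch, assuming the qm importdisk subcommand, a placeholder VMID of 999, a file named disk.vmdk and a storage called local-lvm:
# qm importdisk 999 disk.vmdk local-lvm
The imported image then appears as an unused disk on that VM and can be attached to a controller afterwards.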
+The most popular configuration format for VM exports is the OVF standard, but in +practice interoperation is limited because many settings are not implemented in +the standard itself, and hypervisors export the supplementary information +in non-standard extensions. +Besides the problem of format, importing disk images from other hypervisors +may fail if the emulated hardware changes too much from one hypervisor to +another. Windows VMs are particularly concerned by this, as the OS is very +picky about any changes of hardware. This problem may be solved by +installing the MergeIDE.zip utility available from the Internet before exporting +and choosing a hard disk type of IDE before booting the imported Windows VM. +Finally there is the question of paravirtualized drivers, which improve the +speed of the emulated system and are specific to the hypervisor. +GNU/Linux and other free Unix OSes have all the necessary drivers installed by +default and you can switch to the paravirtualized drivers right after importing +the VM. For Windows VMs, you need to install the Windows paravirtualized +drivers by yourself. +GNU/Linux and other free Unix can usually be imported without hassle. Note +that we cannot guarantee a successful import/export of Windows VMs in all +cases due to the problems above. +Step-by-step example of a Windows OVF import +Microsoft provides +Virtual Machines downloads + to get started with Windows development.We are going to use one of these +to demonstrate the OVF import feature. +Download the Virtual Machine zip +After getting informed about the user agreement, choose the Windows 10 +Enterprise (Evaluation - Build) for the VMware platform, and download the zip. +Extract the disk image from the zip +Using the unzip utility or any archiver of your choice, unpack the zip, +and copy via ssh/scp the ovf and vmdk files to your Proxmox VE host. +Import the Virtual Machine +This will create a new virtual machine, using cores, memory and +VM name as read from the OVF manifest, and import the disks to the local-lvm + storage. You have to configure the network manually. +# qm importovf 999 WinDev1709Eval.ovf local-lvm +The VM is ready to be started. +Adding an external disk image to a Virtual Machine +You can also add an existing disk image to a VM, either coming from a +foreign hypervisor, or one that you created yourself. +Suppose you created a Debian/Ubuntu disk image with the vmdebootstrap tool: +vmdebootstrap --verbose \ + --size 10GiB --serial-console \ + --grub --no-extlinux \ + --package openssh-server \ + --package avahi-daemon \ + --package qemu-guest-agent \ + --hostname vm600 --enable-dhcp \ + --customize=./copy_pub_ssh.sh \ + --sparse --image vm600.raw +You can now create a new target VM, importing the image to the storage pvedir +and attaching it to the VM’s SCSI controller: +# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \ + --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \ + --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw +The VM is ready to be started. +Hookscripts +You can add a hook script to VMs with the config property hookscript. +# qm set 100 --hookscript local:snippets/hookscript.pl +It will be called during various phases of the guests lifetime. +For an example and documentation see the example script under +/usr/share/pve-docs/examples/guest-example-hookscript.pl. 
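A minimal hookscript sketch in shell, assuming (as in the bundled example script) that it is called with the VM ID and the phase name, one of pre-start, post-start, pre-stop or post-stop, as its two arguments; it has to be executable and stored on a snippets-capable storage:
#!/bin/bash
# Hypothetical hookscript: log a message at selected phases of the guest's lifetime.
vmid="$1"
phase="$2"
case "$phase" in
    pre-start)
        # Per the bundled example, a failure (non-zero exit) in pre-start aborts the VM start.
        echo "VM $vmid is about to start"
        ;;
    post-stop)
        echo "VM $vmid was stopped"
        ;;
esac
exit 0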
+Hibernation +You can suspend a VM to disk with the GUI option Hibernate or with +# qm suspend ID --todisk +That means that the current content of the memory will be saved onto disk +and the VM gets stopped. On the next start, the memory content will be +loaded and the VM can continue where it was left off. +State storage selection +If no target storage for the memory is given, it will be automatically +chosen, the first of: +The storage vmstatestorage from the VM config. +The first shared storage from any VM disk. +The first non-shared storage from any VM disk. +The storage local as a fallback. +Managing Virtual Machines with qm +qm is the tool to manage Qemu/Kvm virtual machines on Proxmox VE. You can +create and destroy virtual machines, and control execution +(start/stop/suspend/resume). Besides that, you can use qm to set +parameters in the associated config file. It is also possible to +create and delete virtual disks. +CLI Usage Examples +Using an iso file uploaded on the local storage, create a VM +with a 4 GB IDE disk on the local-lvm storage +# qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso +Start the new VM +# qm start 300 +Send a shutdown request, then wait until the VM is stopped. +# qm shutdown 300 && qm wait 300 +Same as above, but only wait for 40 seconds. +# qm shutdown 300 && qm wait 300 -timeout 40 +Destroying a VM always removes it from Access Control Lists and it always +removes the firewall configuration of the VM. You have to activate +--purge, if you want to additionally remove the VM from replication jobs, +backup jobs and HA resource configurations. +# qm destroy 300 --purge +Move a disk image to a different storage. +# qm move-disk 300 scsi0 other-storage +Reassign a disk image to a different VM. This will remove the disk scsi1 from +the source VM and attaches it as scsi3 to the target VM. In the background +the disk image is being renamed so that the name matches the new owner. +# qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3 +Configuration +VM configuration files are stored inside the Proxmox cluster file +system, and can be accessed at /etc/pve/qemu-server/<VMID>.conf. +Like other files stored inside /etc/pve/, they get automatically +replicated to all other cluster nodes. +VMIDs < 100 are reserved for internal purposes, and VMIDs need to be +unique cluster wide. +Example VM Configuration +boot: order=virtio0;net0 +cores: 1 +sockets: 1 +memory: 512 +name: webmail +ostype: l26 +net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0 +virtio0: local:vm-100-disk-1,size=32G +Those configuration files are simple text files, and you can edit them +using a normal text editor (vi, nano, …). This is sometimes +useful to do small corrections, but keep in mind that you need to +restart the VM to apply such changes. +For that reason, it is usually better to use the qm command to +generate and modify those files, or do the whole thing using the GUI. +Our toolkit is smart enough to instantaneously apply most changes to +running VM. This feature is called "hot plug", and there is no +need to restart the VM in that case. +File Format +VM configuration files use a simple colon separated key/value +format. Each line has the following format: +# this is a comment +OPTION: value +Blank lines in those files are ignored, and lines starting with a # +character are treated as comments and are also ignored. 
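In practice you would usually let qm make the change and then inspect the result rather than editing the file; a small sketch (VMID 100 is a placeholder, and qm config is assumed here as the companion command that prints the current configuration):
# qm set 100 -memory 1024 -cores 2
# qm config 100
Where hot plug applies, such changes take effect on the running VM without a restart, as described above.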
+Snapshots +When you create a snapshot, qm stores the configuration at snapshot +time into a separate snapshot section within the same configuration +file. For example, after creating a snapshot called “testsnapshot”, +your configuration file will look like this: +VM configuration with snapshot +memory: 512 +swap: 512 +parent: testsnaphot +... +[testsnaphot] +memory: 512 +swap: 512 +snaptime: 1457170803 +... +There are a few snapshot related properties like parent and +snaptime. The parent property is used to store the parent/child +relationship between snapshots. snaptime is the snapshot creation +time stamp (Unix epoch). +You can optionally save the memory of a running VM with the option vmstate. +For details about how the target storage gets chosen for the VM state, see +State storage selection in the chapter +Hibernation. +Options +acpi: <boolean> (default = 1) +Enable/disable ACPI. +affinity: <string> +List of host cores used to execute guest processes, for example: 0,5,8-11 +agent: [enabled=]<1|0> [,fstrim_cloned_disks=<1|0>] [,type=<virtio|isa>] +Enable/disable communication with the Qemu Guest Agent and its properties. +enabled=<boolean> (default = 0) +Enable/disable communication with a Qemu Guest Agent (QGA) running in the VM. +fstrim_cloned_disks=<boolean> (default = 0) +Run fstrim after moving a disk or migrating the VM. +type=<isa | virtio> (default = virtio) +Select the agent type +arch: <aarch64 | x86_64> +Virtual processor architecture. Defaults to the host. +args: <string> +Arbitrary arguments passed to kvm, for example: +args: -no-reboot -no-hpet +this option is for experts only. +audio0: device=<ich9-intel-hda|intel-hda|AC97> [,driver=<spice|none>] +Configure a audio device, useful in combination with QXL/Spice. +device=<AC97 | ich9-intel-hda | intel-hda> +Configure an audio device. +driver=<none | spice> (default = spice) +Driver backend for the audio device. +autostart: <boolean> (default = 0) +Automatic restart after crash (currently ignored). +balloon: <integer> (0 - N) +Amount of target RAM for the VM in MB. Using zero disables the ballon driver. +bios: <ovmf | seabios> (default = seabios) +Select BIOS implementation. +boot: [[legacy=]<[acdn]{1,4}>] [,order=<device[;device...]>] +Specify guest boot order. Use the order= sub-property as usage with no key or legacy= is deprecated. +legacy=<[acdn]{1,4}> (default = cdn) +Boot on floppy (a), hard disk (c), CD-ROM (d), or network (n). Deprecated, use order= instead. +order=<device[;device...]> +The guest will attempt to boot from devices in the order they appear here. +Disks, optical drives and passed-through storage USB devices will be directly +booted from, NICs will load PXE, and PCIe devices will either behave like disks +(e.g. NVMe) or load an option ROM (e.g. RAID controller, hardware NIC). +Note that only devices in this list will be marked as bootable and thus loaded +by the guest firmware (BIOS/UEFI). If you require multiple disks for booting +(e.g. software-raid), you need to specify all of them here. +Overrides the deprecated legacy=[acdn]* value when given. +bootdisk: (ide|sata|scsi|virtio)\d+ +Enable booting from specified disk. Deprecated: Use boot: order=foo;bar instead. +cdrom: <volume> +This is an alias for option -ide2 +cicustom: [meta=<volume>] [,network=<volume>] [,user=<volume>] [,vendor=<volume>] +cloud-init: Specify custom files to replace the automatically generated ones at start. +meta=<volume> +Specify a custom file containing all meta data passed to the VM via" + ." cloud-init. 
This is provider specific meaning configdrive2 and nocloud differ. +network=<volume> +Specify a custom file containing all network data passed to the VM via cloud-init. +user=<volume> +Specify a custom file containing all user data passed to the VM via cloud-init. +vendor=<volume> +Specify a custom file containing all vendor data passed to the VM via cloud-init. +cipassword: <string> +cloud-init: Password to assign the user. Using this is generally not recommended. Use ssh keys instead. Also note that older cloud-init versions do not support hashed passwords. +citype: <configdrive2 | nocloud | opennebula> +Specifies the cloud-init configuration format. The default depends on the configured operating system type (ostype. We use the nocloud format for Linux, and configdrive2 for windows. +ciuser: <string> +cloud-init: User name to change ssh keys and password for instead of the image’s configured default user. +cores: <integer> (1 - N) (default = 1) +The number of cores per socket. +cpu: [[cputype=]<string>] [,flags=<+FLAG[;-FLAG...]>] [,hidden=<1|0>] [,hv-vendor-id=<vendor-id>] [,phys-bits=<8-64|host>] [,reported-model=<enum>] +Emulated CPU type. +cputype=<string> (default = kvm64) +Emulated CPU type. Can be default or custom name (custom model names must be prefixed with custom-). +flags=<+FLAG[;-FLAG...]> +List of additional CPU flags separated by ;. Use +FLAG to enable, -FLAG to disable a flag. Custom CPU models can specify any flag supported by QEMU/KVM, VM-specific flags must be from the following set for security reasons: pcid, spec-ctrl, ibpb, ssbd, virt-ssbd, amd-ssbd, amd-no-ssb, pdpe1gb, md-clear, hv-tlbflush, hv-evmcs, aes +hidden=<boolean> (default = 0) +Do not identify as a KVM virtual machine. +hv-vendor-id=<vendor-id> +The Hyper-V vendor ID. Some drivers or programs inside Windows guests need a specific ID. +phys-bits=<8-64|host> +The physical memory address bits that are reported to the guest OS. Should be smaller or equal to the host’s. Set to host to use value from host CPU, but note that doing so will break live migration to CPUs with other values. +reported-model=<486 | Broadwell | Broadwell-IBRS | Broadwell-noTSX | Broadwell-noTSX-IBRS | Cascadelake-Server | Cascadelake-Server-noTSX | Conroe | EPYC | EPYC-IBPB | EPYC-Milan | EPYC-Rome | Haswell | Haswell-IBRS | Haswell-noTSX | Haswell-noTSX-IBRS | Icelake-Client | Icelake-Client-noTSX | Icelake-Server | Icelake-Server-noTSX | IvyBridge | IvyBridge-IBRS | KnightsMill | Nehalem | Nehalem-IBRS | Opteron_G1 | Opteron_G2 | Opteron_G3 | Opteron_G4 | Opteron_G5 | Penryn | SandyBridge | SandyBridge-IBRS | Skylake-Client | Skylake-Client-IBRS | Skylake-Client-noTSX-IBRS | Skylake-Server | Skylake-Server-IBRS | Skylake-Server-noTSX-IBRS | Westmere | Westmere-IBRS | athlon | core2duo | coreduo | host | kvm32 | kvm64 | max | pentium | pentium2 | pentium3 | phenom | qemu32 | qemu64> (default = kvm64) +CPU model and vendor to report to the guest. Must be a QEMU/KVM supported model. Only valid for custom CPU model definitions, default models will always report themselves to the guest OS. +cpulimit: <number> (0 - 128) (default = 0) +Limit of CPU usage. +If the computer has 2 CPUs, it has total of 2 CPU time. Value 0 indicates no CPU limit. +cpuunits: <integer> (1 - 262144) (default = cgroup v1: 1024, cgroup v2: 100) +CPU weight for a VM. Argument is used in the kernel fair scheduler. The larger the number is, the more CPU time this VM gets. Number is relative to weights of all the other running VMs. 
+description: <string> +Description for the VM. Shown in the web-interface VM’s summary. This is saved as comment inside the configuration file. +efidisk0: [file=]<volume> [,efitype=<2m|4m>] [,format=<enum>] [,pre-enrolled-keys=<1|0>] [,size=<DiskSize>] +Configure a Disk for storing EFI vars. +efitype=<2m | 4m> (default = 2m) +Size and type of the OVMF EFI vars. 4m is newer and recommended, and required for Secure Boot. For backwards compatibility, 2m is used if not otherwise specified. +file=<volume> +The drive’s backing volume. +format=<cloop | cow | qcow | qcow2 | qed | raw | vmdk> +The drive’s backing file’s data format. +pre-enrolled-keys=<boolean> (default = 0) +Use am EFI vars template with distribution-specific and Microsoft Standard keys enrolled, if used with efitype=4m. Note that this will enable Secure Boot by default, though it can still be turned off from within the VM. +size=<DiskSize> +Disk size. This is purely informational and has no effect. +freeze: <boolean> +Freeze CPU at startup (use c monitor command to start execution). +hookscript: <string> +Script that will be executed during various steps in the vms lifetime. +hostpci[n]: [host=]<HOSTPCIID[;HOSTPCIID2...]> [,device-id=<hex id>] [,legacy-igd=<1|0>] [,mdev=<string>] [,pcie=<1|0>] [,rombar=<1|0>] [,romfile=<string>] [,sub-device-id=<hex id>] [,sub-vendor-id=<hex id>] [,vendor-id=<hex id>] [,x-vga=<1|0>] +Map host PCI devices into guest. +This option allows direct access to host hardware. So it is no longer +possible to migrate such machines - use with special care. +Experimental! User reported problems with this option. +device-id=<hex id> +Override PCI device ID visible to guest +host=<HOSTPCIID[;HOSTPCIID2...]> +Host PCI device pass through. The PCI ID of a host’s PCI device or a list +of PCI virtual functions of the host. HOSTPCIID syntax is: +bus:dev.func (hexadecimal numbers) +You can us the lspci command to list existing PCI devices. +legacy-igd=<boolean> (default = 0) +Pass this device in legacy IGD mode, making it the primary and exclusive graphics device in the VM. Requires pc-i440fx machine type and VGA set to none. +mdev=<string> +The type of mediated device to use. +An instance of this type will be created on startup of the VM and +will be cleaned up when the VM stops. +pcie=<boolean> (default = 0) +Choose the PCI-express bus (needs the q35 machine model). +rombar=<boolean> (default = 1) +Specify whether or not the device’s ROM will be visible in the guest’s memory map. +romfile=<string> +Custom pci device rom filename (must be located in /usr/share/kvm/). +sub-device-id=<hex id> +Override PCI subsystem device ID visible to guest +sub-vendor-id=<hex id> +Override PCI subsystem vendor ID visible to guest +vendor-id=<hex id> +Override PCI vendor ID visible to guest +x-vga=<boolean> (default = 0) +Enable vfio-vga device support. +hotplug: <string> (default = network,disk,usb) +Selectively enable hotplug features. This is a comma separated list of hotplug features: network, disk, cpu, memory, usb and cloudinit. Use 0 to disable hotplug completely. Using 1 as value is an alias for the default network,disk,usb. USB hotplugging is possible for guests with machine version >= 7.1 and ostype l26 or windows > 7. +hugepages: <1024 | 2 | any> +Enable/disable hugepages memory. 
+ide[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,model=<model>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>] +Use volume as IDE hard disk or CD-ROM (n is 0 to 3). +aio=<io_uring | native | threads> +AIO type to use. +backup=<boolean> +Whether the drive should be included when making backups. +bps=<bps> +Maximum r/w speed in bytes per second. +bps_max_length=<seconds> +Maximum length of I/O bursts in seconds. +bps_rd=<bps> +Maximum read speed in bytes per second. +bps_rd_max_length=<seconds> +Maximum length of read I/O bursts in seconds. +bps_wr=<bps> +Maximum write speed in bytes per second. +bps_wr_max_length=<seconds> +Maximum length of write I/O bursts in seconds. +cache=<directsync | none | unsafe | writeback | writethrough> +The drive’s cache mode +cyls=<integer> +Force the drive’s physical geometry to have a specific cylinder count. +detect_zeroes=<boolean> +Controls whether to detect and try to optimize writes of zeroes. +discard=<ignore | on> +Controls whether to pass discard/trim requests to the underlying storage. +file=<volume> +The drive’s backing volume. +format=<cloop | cow | qcow | qcow2 | qed | raw | vmdk> +The drive’s backing file’s data format. +heads=<integer> +Force the drive’s physical geometry to have a specific head count. +iops=<iops> +Maximum r/w I/O in operations per second. +iops_max=<iops> +Maximum unthrottled r/w I/O pool in operations per second. +iops_max_length=<seconds> +Maximum length of I/O bursts in seconds. +iops_rd=<iops> +Maximum read I/O in operations per second. +iops_rd_max=<iops> +Maximum unthrottled read I/O pool in operations per second. +iops_rd_max_length=<seconds> +Maximum length of read I/O bursts in seconds. +iops_wr=<iops> +Maximum write I/O in operations per second. +iops_wr_max=<iops> +Maximum unthrottled write I/O pool in operations per second. +iops_wr_max_length=<seconds> +Maximum length of write I/O bursts in seconds. +mbps=<mbps> +Maximum r/w speed in megabytes per second. +mbps_max=<mbps> +Maximum unthrottled r/w pool in megabytes per second. +mbps_rd=<mbps> +Maximum read speed in megabytes per second. +mbps_rd_max=<mbps> +Maximum unthrottled read pool in megabytes per second. +mbps_wr=<mbps> +Maximum write speed in megabytes per second. +mbps_wr_max=<mbps> +Maximum unthrottled write pool in megabytes per second. +media=<cdrom | disk> (default = disk) +The drive’s media type. +model=<model> +The drive’s reported model name, url-encoded, up to 40 bytes long. +replicate=<boolean> (default = 1) +Whether the drive should considered for replication jobs. +rerror=<ignore | report | stop> +Read error action. +secs=<integer> +Force the drive’s physical geometry to have a specific sector count. 
+serial=<serial> +The drive’s reported serial number, url-encoded, up to 20 bytes long. +shared=<boolean> (default = 0) +Mark this locally-managed volume as available on all nodes. +This option does not share the volume automatically, it assumes it is shared already! +size=<DiskSize> +Disk size. This is purely informational and has no effect. +snapshot=<boolean> +Controls qemu’s snapshot mode feature. If activated, changes made to the disk are temporary and will be discarded when the VM is shutdown. +ssd=<boolean> +Whether to expose this drive as an SSD, rather than a rotational hard disk. +trans=<auto | lba | none> +Force disk geometry bios translation mode. +werror=<enospc | ignore | report | stop> +Write error action. +wwn=<wwn> +The drive’s worldwide name, encoded as 16 bytes hex string, prefixed by 0x. +ipconfig[n]: [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4Format/CIDR>] [,ip6=<IPv6Format/CIDR>] +cloud-init: Specify IP addresses and gateways for the corresponding interface. +IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified. +The special string dhcp can be used for IP addresses to use DHCP, in which case no explicit +gateway should be provided. +For IPv6 the special string auto can be used to use stateless autoconfiguration. This requires +cloud-init 19.4 or newer. +If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using +dhcp on IPv4. +gw=<GatewayIPv4> +Default gateway for IPv4 traffic. +Requires option(s): ip +gw6=<GatewayIPv6> +Default gateway for IPv6 traffic. +Requires option(s): ip6 +ip=<IPv4Format/CIDR> (default = dhcp) +IPv4 address in CIDR format. +ip6=<IPv6Format/CIDR> (default = dhcp) +IPv6 address in CIDR format. +ivshmem: size=<integer> [,name=<string>] +Inter-VM shared memory. Useful for direct communication between VMs, or to the host. +name=<string> +The name of the file. Will be prefixed with pve-shm-. Default is the VMID. Will be deleted when the VM is stopped. +size=<integer> (1 - N) +The size of the file in MB. +keephugepages: <boolean> (default = 0) +Use together with hugepages. If enabled, hugepages will not not be deleted after VM shutdown and can be used for subsequent starts. +keyboard: <da | de | de-ch | en-gb | en-us | es | fi | fr | fr-be | fr-ca | fr-ch | hu | is | it | ja | lt | mk | nl | no | pl | pt | pt-br | sl | sv | tr> +Keyboard layout for VNC server. This option is generally not required and is often better handled from within the guest OS. +kvm: <boolean> (default = 1) +Enable/disable KVM hardware virtualization. +localtime: <boolean> +Set the real time clock (RTC) to local time. This is enabled by default if the ostype indicates a Microsoft Windows OS. +lock: <backup | clone | create | migrate | rollback | snapshot | snapshot-delete | suspended | suspending> +Lock/unlock the VM. +machine: (pc|pc(-i440fx)?-\d+(\.\d+)+(\+pve\d+)?(\.pxe)?|q35|pc-q35-\d+(\.\d+)+(\+pve\d+)?(\.pxe)?|virt(?:-\d+(\.\d+)+)?(\+pve\d+)?) +Specifies the Qemu machine type. +memory: <integer> (16 - N) (default = 512) +Amount of RAM for the VM in MB. This is the maximum available memory when you use the balloon device. +migrate_downtime: <number> (0 - N) (default = 0.1) +Set maximum tolerated downtime (in seconds) for migrations. +migrate_speed: <integer> (0 - N) (default = 0) +Set maximum speed (in MB/s) for migrations. Value 0 is no limit. +name: <string> +Set a name for the VM. Only used on the configuration web interface. 
+nameserver: <string> +cloud-init: Sets DNS server IP address for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set. +net[n]: [model=]<enum> [,bridge=<bridge>] [,firewall=<1|0>] [,link_down=<1|0>] [,macaddr=<XX:XX:XX:XX:XX:XX>] [,mtu=<integer>] [,queues=<integer>] [,rate=<number>] [,tag=<integer>] [,trunks=<vlanid[;vlanid...]>] [,<model>=<macaddr>] +Specify network devices. +bridge=<bridge> +Bridge to attach the network device to. The Proxmox VE standard bridge +is called vmbr0. +If you do not specify a bridge, we create a kvm user (NATed) network +device, which provides DHCP and DNS services. The following addresses +are used: +10.0.2.2 Gateway +10.0.2.3 DNS Server +10.0.2.4 SMB Server +The DHCP server assign addresses to the guest starting from 10.0.2.15. +firewall=<boolean> +Whether this interface should be protected by the firewall. +link_down=<boolean> +Whether this interface should be disconnected (like pulling the plug). +macaddr=<XX:XX:XX:XX:XX:XX> +A common MAC address with the I/G (Individual/Group) bit not set. +model=<e1000 | e1000-82540em | e1000-82544gc | e1000-82545em | e1000e | i82551 | i82557b | i82559er | ne2k_isa | ne2k_pci | pcnet | rtl8139 | virtio | vmxnet3> +Network Card Model. The virtio model provides the best performance with very low CPU overhead. If your guest does not support this driver, it is usually best to use e1000. +mtu=<integer> (1 - 65520) +Force MTU, for VirtIO only. Set to 1 to use the bridge MTU +queues=<integer> (0 - 64) +Number of packet queues to be used on the device. +rate=<number> (0 - N) +Rate limit in mbps (megabytes per second) as floating point number. +tag=<integer> (1 - 4094) +VLAN tag to apply to packets on this interface. +trunks=<vlanid[;vlanid...]> +VLAN trunks to pass through this interface. +numa: <boolean> (default = 0) +Enable/disable NUMA. +numa[n]: cpus=<id[-id];...> [,hostnodes=<id[-id];...>] [,memory=<number>] [,policy=<preferred|bind|interleave>] +NUMA topology. +cpus=<id[-id];...> +CPUs accessing this NUMA node. +hostnodes=<id[-id];...> +Host NUMA nodes to use. +memory=<number> +Amount of memory this NUMA node provides. +policy=<bind | interleave | preferred> +NUMA allocation policy. +onboot: <boolean> (default = 0) +Specifies whether a VM will be started during system bootup. +ostype: <l24 | l26 | other | solaris | w2k | w2k3 | w2k8 | win10 | win11 | win7 | win8 | wvista | wxp> +Specify guest operating system. This is used to enable special +optimization/features for specific operating systems: +other +unspecified OS +wxp +Microsoft Windows XP +w2k +Microsoft Windows 2000 +w2k3 +Microsoft Windows 2003 +w2k8 +Microsoft Windows 2008 +wvista +Microsoft Windows Vista +win7 +Microsoft Windows 7 +win8 +Microsoft Windows 8/2012/2012r2 +win10 +Microsoft Windows 10/2016/2019 +win11 +Microsoft Windows 11/2022 +l24 +Linux 2.4 Kernel +l26 +Linux 2.6 - 5.X Kernel +solaris +Solaris/OpenSolaris/OpenIndiania kernel +parallel[n]: /dev/parport\d+|/dev/usb/lp\d+ +Map host parallel devices (n is 0 to 2). +This option allows direct access to host hardware. So it is no longer possible to migrate such +machines - use with special care. +Experimental! User reported problems with this option. +protection: <boolean> (default = 0) +Sets the protection flag of the VM. This will disable the remove VM and remove disk operations. +reboot: <boolean> (default = 1) +Allow reboot. If set to 0 the VM exit on reboot. 
+rng0: [source=]</dev/urandom|/dev/random|/dev/hwrng> [,max_bytes=<integer>] [,period=<integer>] +Configure a VirtIO-based Random Number Generator. +max_bytes=<integer> (default = 1024) +Maximum bytes of entropy allowed to get injected into the guest every period milliseconds. Prefer a lower value when using /dev/random as source. Use 0 to disable limiting (potentially dangerous!). +period=<integer> (default = 1000) +Every period milliseconds the entropy-injection quota is reset, allowing the guest to retrieve another max_bytes of entropy. +source=</dev/hwrng | /dev/random | /dev/urandom> +The file on the host to gather entropy from. In most cases /dev/urandom should be preferred over /dev/random to avoid entropy-starvation issues on the host. Using urandom does not decrease security in any meaningful way, as it’s still seeded from real entropy, and the bytes provided will most likely be mixed with real entropy on the guest as well. /dev/hwrng can be used to pass through a hardware RNG from the host. +sata[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>] +Use volume as SATA hard disk or CD-ROM (n is 0 to 5). +aio=<io_uring | native | threads> +AIO type to use. +backup=<boolean> +Whether the drive should be included when making backups. +bps=<bps> +Maximum r/w speed in bytes per second. +bps_max_length=<seconds> +Maximum length of I/O bursts in seconds. +bps_rd=<bps> +Maximum read speed in bytes per second. +bps_rd_max_length=<seconds> +Maximum length of read I/O bursts in seconds. +bps_wr=<bps> +Maximum write speed in bytes per second. +bps_wr_max_length=<seconds> +Maximum length of write I/O bursts in seconds. +cache=<directsync | none | unsafe | writeback | writethrough> +The drive’s cache mode +cyls=<integer> +Force the drive’s physical geometry to have a specific cylinder count. +detect_zeroes=<boolean> +Controls whether to detect and try to optimize writes of zeroes. +discard=<ignore | on> +Controls whether to pass discard/trim requests to the underlying storage. +file=<volume> +The drive’s backing volume. +format=<cloop | cow | qcow | qcow2 | qed | raw | vmdk> +The drive’s backing file’s data format. +heads=<integer> +Force the drive’s physical geometry to have a specific head count. +iops=<iops> +Maximum r/w I/O in operations per second. +iops_max=<iops> +Maximum unthrottled r/w I/O pool in operations per second. +iops_max_length=<seconds> +Maximum length of I/O bursts in seconds. +iops_rd=<iops> +Maximum read I/O in operations per second. +iops_rd_max=<iops> +Maximum unthrottled read I/O pool in operations per second. +iops_rd_max_length=<seconds> +Maximum length of read I/O bursts in seconds. +iops_wr=<iops> +Maximum write I/O in operations per second. 
+iops_wr_max=<iops> +Maximum unthrottled write I/O pool in operations per second. +iops_wr_max_length=<seconds> +Maximum length of write I/O bursts in seconds. +mbps=<mbps> +Maximum r/w speed in megabytes per second. +mbps_max=<mbps> +Maximum unthrottled r/w pool in megabytes per second. +mbps_rd=<mbps> +Maximum read speed in megabytes per second. +mbps_rd_max=<mbps> +Maximum unthrottled read pool in megabytes per second. +mbps_wr=<mbps> +Maximum write speed in megabytes per second. +mbps_wr_max=<mbps> +Maximum unthrottled write pool in megabytes per second. +media=<cdrom | disk> (default = disk) +The drive’s media type. +replicate=<boolean> (default = 1) +Whether the drive should considered for replication jobs. +rerror=<ignore | report | stop> +Read error action. +secs=<integer> +Force the drive’s physical geometry to have a specific sector count. +serial=<serial> +The drive’s reported serial number, url-encoded, up to 20 bytes long. +shared=<boolean> (default = 0) +Mark this locally-managed volume as available on all nodes. +This option does not share the volume automatically, it assumes it is shared already! +size=<DiskSize> +Disk size. This is purely informational and has no effect. +snapshot=<boolean> +Controls qemu’s snapshot mode feature. If activated, changes made to the disk are temporary and will be discarded when the VM is shutdown. +ssd=<boolean> +Whether to expose this drive as an SSD, rather than a rotational hard disk. +trans=<auto | lba | none> +Force disk geometry bios translation mode. +werror=<enospc | ignore | report | stop> +Write error action. +wwn=<wwn> +The drive’s worldwide name, encoded as 16 bytes hex string, prefixed by 0x. +scsi[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,queues=<integer>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,scsiblock=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>] +Use volume as SCSI hard disk or CD-ROM (n is 0 to 30). +aio=<io_uring | native | threads> +AIO type to use. +backup=<boolean> +Whether the drive should be included when making backups. +bps=<bps> +Maximum r/w speed in bytes per second. +bps_max_length=<seconds> +Maximum length of I/O bursts in seconds. +bps_rd=<bps> +Maximum read speed in bytes per second. +bps_rd_max_length=<seconds> +Maximum length of read I/O bursts in seconds. +bps_wr=<bps> +Maximum write speed in bytes per second. +bps_wr_max_length=<seconds> +Maximum length of write I/O bursts in seconds. +cache=<directsync | none | unsafe | writeback | writethrough> +The drive’s cache mode +cyls=<integer> +Force the drive’s physical geometry to have a specific cylinder count. +detect_zeroes=<boolean> +Controls whether to detect and try to optimize writes of zeroes. 
+discard=<ignore | on> +Controls whether to pass discard/trim requests to the underlying storage. +file=<volume> +The drive’s backing volume. +format=<cloop | cow | qcow | qcow2 | qed | raw | vmdk> +The drive’s backing file’s data format. +heads=<integer> +Force the drive’s physical geometry to have a specific head count. +iops=<iops> +Maximum r/w I/O in operations per second. +iops_max=<iops> +Maximum unthrottled r/w I/O pool in operations per second. +iops_max_length=<seconds> +Maximum length of I/O bursts in seconds. +iops_rd=<iops> +Maximum read I/O in operations per second. +iops_rd_max=<iops> +Maximum unthrottled read I/O pool in operations per second. +iops_rd_max_length=<seconds> +Maximum length of read I/O bursts in seconds. +iops_wr=<iops> +Maximum write I/O in operations per second. +iops_wr_max=<iops> +Maximum unthrottled write I/O pool in operations per second. +iops_wr_max_length=<seconds> +Maximum length of write I/O bursts in seconds. +iothread=<boolean> +Whether to use iothreads for this drive +mbps=<mbps> +Maximum r/w speed in megabytes per second. +mbps_max=<mbps> +Maximum unthrottled r/w pool in megabytes per second. +mbps_rd=<mbps> +Maximum read speed in megabytes per second. +mbps_rd_max=<mbps> +Maximum unthrottled read pool in megabytes per second. +mbps_wr=<mbps> +Maximum write speed in megabytes per second. +mbps_wr_max=<mbps> +Maximum unthrottled write pool in megabytes per second. +media=<cdrom | disk> (default = disk) +The drive’s media type. +queues=<integer> (2 - N) +Number of queues. +replicate=<boolean> (default = 1) +Whether the drive should considered for replication jobs. +rerror=<ignore | report | stop> +Read error action. +ro=<boolean> +Whether the drive is read-only. +scsiblock=<boolean> (default = 0) +whether to use scsi-block for full passthrough of host block device +can lead to I/O errors in combination with low memory or high memory fragmentation on host +secs=<integer> +Force the drive’s physical geometry to have a specific sector count. +serial=<serial> +The drive’s reported serial number, url-encoded, up to 20 bytes long. +shared=<boolean> (default = 0) +Mark this locally-managed volume as available on all nodes. +This option does not share the volume automatically, it assumes it is shared already! +size=<DiskSize> +Disk size. This is purely informational and has no effect. +snapshot=<boolean> +Controls qemu’s snapshot mode feature. If activated, changes made to the disk are temporary and will be discarded when the VM is shutdown. +ssd=<boolean> +Whether to expose this drive as an SSD, rather than a rotational hard disk. +trans=<auto | lba | none> +Force disk geometry bios translation mode. +werror=<enospc | ignore | report | stop> +Write error action. +wwn=<wwn> +The drive’s worldwide name, encoded as 16 bytes hex string, prefixed by 0x. +scsihw: <lsi | lsi53c810 | megasas | pvscsi | virtio-scsi-pci | virtio-scsi-single> (default = lsi) +SCSI controller model +searchdomain: <string> +cloud-init: Sets DNS search domains for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set. +serial[n]: (/dev/.+|socket) +Create a serial device inside the VM (n is 0 to 3), and pass through a +host serial device (i.e. /dev/ttyS0), or create a unix socket on the +host side (use qm terminal to open a terminal connection). +If you pass through a host serial device, it is no longer possible to migrate such machines - +use with special care. +Experimental! User reported problems with this option. 
+shares: <integer> (0 - 50000) (default = 1000) +Amount of memory shares for auto-ballooning. The larger the number is, the more memory this VM gets. Number is relative to weights of all other running VMs. Using zero disables auto-ballooning. Auto-ballooning is done by pvestatd. +smbios1: [base64=<1|0>] [,family=<Base64 encoded string>] [,manufacturer=<Base64 encoded string>] [,product=<Base64 encoded string>] [,serial=<Base64 encoded string>] [,sku=<Base64 encoded string>] [,uuid=<UUID>] [,version=<Base64 encoded string>] +Specify SMBIOS type 1 fields. +base64=<boolean> +Flag to indicate that the SMBIOS values are base64 encoded +family=<Base64 encoded string> +Set SMBIOS1 family string. +manufacturer=<Base64 encoded string> +Set SMBIOS1 manufacturer. +product=<Base64 encoded string> +Set SMBIOS1 product ID. +serial=<Base64 encoded string> +Set SMBIOS1 serial number. +sku=<Base64 encoded string> +Set SMBIOS1 SKU string. +uuid=<UUID> +Set SMBIOS1 UUID. +version=<Base64 encoded string> +Set SMBIOS1 version. +smp: <integer> (1 - N) (default = 1) +The number of CPUs. Please use option -sockets instead. +sockets: <integer> (1 - N) (default = 1) +The number of CPU sockets. +spice_enhancements: [foldersharing=<1|0>] [,videostreaming=<off|all|filter>] +Configure additional enhancements for SPICE. +foldersharing=<boolean> (default = 0) +Enable folder sharing via SPICE. Needs Spice-WebDAV daemon installed in the VM. +videostreaming=<all | filter | off> (default = off) +Enable video streaming. Uses compression for detected video streams. +sshkeys: <string> +cloud-init: Setup public SSH keys (one key per line, OpenSSH format). +startdate: (now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS) (default = now) +Set the initial date of the real time clock. Valid format for date are:'now' or 2006-06-17T16:01:21 or 2006-06-17. +startup: `[[order=]\d+] [,up=\d+] [,down=\d+] ` +Startup and shutdown behavior. Order is a non-negative number defining the general startup order. Shutdown in done with reverse ordering. Additionally you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped. +tablet: <boolean> (default = 1) +Enable/disable the USB tablet device. This device is usually needed to allow absolute mouse positioning with VNC. Else the mouse runs out of sync with normal VNC clients. If you’re running lots of console-only guests on one host, you may consider disabling this to save some context switches. This is turned off by default if you use spice (qm set <vmid> --vga qxl). +tags: <string> +Tags of the VM. This is only meta information. +tdf: <boolean> (default = 0) +Enable/disable time drift fix. +template: <boolean> (default = 0) +Enable/disable Template. +tpmstate0: [file=]<volume> [,size=<DiskSize>] [,version=<v1.2|v2.0>] +Configure a Disk for storing TPM state. The format is fixed to raw. +file=<volume> +The drive’s backing volume. +size=<DiskSize> +Disk size. This is purely informational and has no effect. +version=<v1.2 | v2.0> (default = v2.0) +The TPM interface version. v2.0 is newer and should be preferred. Note that this cannot be changed later on. +unused[n]: [file=]<volume> +Reference to unused volumes. This is used internally, and should not be modified manually. +file=<volume> +The drive’s backing volume. +usb[n]: [host=]<HOSTUSBDEVICE|spice> [,usb3=<1|0>] +Configure an USB device (n is 0 to 4, for machine version >= 7.1 and ostype l26 or windows > 7, n can be up to 14). 
+host=<HOSTUSBDEVICE|spice> +The Host USB device or port or the value spice. HOSTUSBDEVICE syntax is: +'bus-port(.port)*' (decimal numbers) or +'vendor_id:product_id' (hexadeciaml numbers) or +'spice' +You can use the lsusb -t command to list existing usb devices. +This option allows direct access to host hardware. So it is no longer possible to migrate such +machines - use with special care. +The value spice can be used to add a usb redirection devices for spice. +usb3=<boolean> (default = 0) +Specifies whether if given host option is a USB3 device or port. For modern guests (machine version >= 7.1 and ostype l26 and windows > 7), this flag is irrelevant (all devices are plugged into a xhci controller). +vcpus: <integer> (1 - N) (default = 0) +Number of hotplugged vcpus. +vga: [[type=]<enum>] [,memory=<integer>] +Configure the VGA Hardware. If you want to use high resolution modes (>= 1280x1024x16) you may need to increase the vga memory option. Since QEMU 2.9 the default VGA display type is std for all OS types besides some Windows versions (XP and older) which use cirrus. The qxl option enables the SPICE display server. For win* OS you can select how many independent displays you want, Linux guests can add displays them self. +You can also run without any graphic card, using a serial device as terminal. +memory=<integer> (4 - 512) +Sets the VGA memory (in MiB). Has no effect with serial display. +type=<cirrus | none | qxl | qxl2 | qxl3 | qxl4 | serial0 | serial1 | serial2 | serial3 | std | virtio | virtio-gl | vmware> (default = std) +Select the VGA type. +virtio[n]: [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] +Use volume as VIRTIO hard disk (n is 0 to 15). +aio=<io_uring | native | threads> +AIO type to use. +backup=<boolean> +Whether the drive should be included when making backups. +bps=<bps> +Maximum r/w speed in bytes per second. +bps_max_length=<seconds> +Maximum length of I/O bursts in seconds. +bps_rd=<bps> +Maximum read speed in bytes per second. +bps_rd_max_length=<seconds> +Maximum length of read I/O bursts in seconds. +bps_wr=<bps> +Maximum write speed in bytes per second. +bps_wr_max_length=<seconds> +Maximum length of write I/O bursts in seconds. +cache=<directsync | none | unsafe | writeback | writethrough> +The drive’s cache mode +cyls=<integer> +Force the drive’s physical geometry to have a specific cylinder count. +detect_zeroes=<boolean> +Controls whether to detect and try to optimize writes of zeroes. +discard=<ignore | on> +Controls whether to pass discard/trim requests to the underlying storage. +file=<volume> +The drive’s backing volume. +format=<cloop | cow | qcow | qcow2 | qed | raw | vmdk> +The drive’s backing file’s data format. 
+heads=<integer> +Force the drive’s physical geometry to have a specific head count. +iops=<iops> +Maximum r/w I/O in operations per second. +iops_max=<iops> +Maximum unthrottled r/w I/O pool in operations per second. +iops_max_length=<seconds> +Maximum length of I/O bursts in seconds. +iops_rd=<iops> +Maximum read I/O in operations per second. +iops_rd_max=<iops> +Maximum unthrottled read I/O pool in operations per second. +iops_rd_max_length=<seconds> +Maximum length of read I/O bursts in seconds. +iops_wr=<iops> +Maximum write I/O in operations per second. +iops_wr_max=<iops> +Maximum unthrottled write I/O pool in operations per second. +iops_wr_max_length=<seconds> +Maximum length of write I/O bursts in seconds. +iothread=<boolean> +Whether to use iothreads for this drive +mbps=<mbps> +Maximum r/w speed in megabytes per second. +mbps_max=<mbps> +Maximum unthrottled r/w pool in megabytes per second. +mbps_rd=<mbps> +Maximum read speed in megabytes per second. +mbps_rd_max=<mbps> +Maximum unthrottled read pool in megabytes per second. +mbps_wr=<mbps> +Maximum write speed in megabytes per second. +mbps_wr_max=<mbps> +Maximum unthrottled write pool in megabytes per second. +media=<cdrom | disk> (default = disk) +The drive’s media type. +replicate=<boolean> (default = 1) +Whether the drive should considered for replication jobs. +rerror=<ignore | report | stop> +Read error action. +ro=<boolean> +Whether the drive is read-only. +secs=<integer> +Force the drive’s physical geometry to have a specific sector count. +serial=<serial> +The drive’s reported serial number, url-encoded, up to 20 bytes long. +shared=<boolean> (default = 0) +Mark this locally-managed volume as available on all nodes. +This option does not share the volume automatically, it assumes it is shared already! +size=<DiskSize> +Disk size. This is purely informational and has no effect. +snapshot=<boolean> +Controls qemu’s snapshot mode feature. If activated, changes made to the disk are temporary and will be discarded when the VM is shutdown. +trans=<auto | lba | none> +Force disk geometry bios translation mode. +werror=<enospc | ignore | report | stop> +Write error action. +vmgenid: <UUID> (default = 1 (autogenerated)) +The VM generation ID (vmgenid) device exposes a 128-bit integer value identifier to the guest OS. This allows to notify the guest operating system when the virtual machine is executed with a different configuration (e.g. snapshot execution or creation from a template). The guest operating system notices the change, and is then able to react as appropriate by marking its copies of distributed databases as dirty, re-initializing its random number generator, etc. +Note that auto-creation only works when done through API/CLI create or update methods, but not when manually editing the config file. +vmstatestorage: <string> +Default storage for VM state volumes/files. +watchdog: [[model=]<i6300esb|ib700>] [,action=<enum>] +Create a virtual hardware watchdog device. Once enabled (by a guest action), the watchdog must be periodically polled by an agent inside the guest or else the watchdog will reset the guest (or execute the respective action specified) +action=<debug | none | pause | poweroff | reset | shutdown> +The action to perform if after activation the guest fails to poll the watchdog in time. +model=<i6300esb | ib700> (default = i6300esb) +Watchdog type to emulate. +Locks +Online migrations, snapshots and backups (vzdump) set a lock to prevent +incompatible concurrent actions on the affected VMs. 
Sometimes you need to +remove such a lock manually (for example after a power failure). +# qm unlock <vmid> +Only do that if you are sure the action which set the lock is +no longer running. +See Also +Cloud-Init Support + + +``` + +## PCI Passthrough + +```wiki +== Introduction == + +{{Note|This is a collection of examples, workarounds, hacks, and specific issues for PCI(e) passthrough. For a step-by-step guide on how and what to do to pass through PCI(e) devices, see [https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough the docs] or [[PCI(e)_Passthrough|the wiki page generated from the docs]]}} + +PCI passthrough allows you to use a physical PCI device (graphics card, network card) inside a VM (KVM virtualization only). + +If you "PCI passthrough" a device, the device is not available to the host anymore. Note that VMs with passed-through devices cannot be migrated. + +== Requirements == + +This is a list of basic requirements adapted from [https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#Prerequisites the Arch wiki] + +; CPU requirements: +: Your CPU has to support hardware virtualization and IOMMU. Most new CPUs support this. +* AMD: CPUs from the Bulldozer generation and newer, CPUs from the K10 generation need a 890FX or 990FX motherboard. +* Intel: [https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&0_VTD=True list of VT-d capable Intel CPUs] + +; Motherboard requirements: +: Your motherboard needs to support IOMMU. Lists can be found on [https://wiki.xenproject.org/wiki/VTd_HowTo the Xen wiki] and [https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware Wikipedia]. Note that, as of writing, both these lists are incomplete and very out-of-date and most newer motherboards support IOMMU. + +; GPU requirements: +: The ROM of your GPU does not necessarily need to support UEFI, however, most modern GPUs do. If you GPU ROM supports UEFI, it is recommended to use OVMF (UEFI) instead of SeaBIOS. For a list of GPU ROMs, see [https://www.techpowerup.com/vgabios/?architecture=&manufacturer=&model=&version=&interface=&memType=&memSize=&since= Techpowerup's collection of GPU ROMs] + +== Verifying IOMMU parameters == +=== Verify IOMMU is enabled === + +Reboot, then run: + dmesg | grep -e DMAR -e IOMMU + +There should be a line that looks like "DMAR: IOMMU enabled". If there is no output, something is wrong. + +=== Verify IOMMU interrupt remapping is enabled === + +It is not possible to use PCI passthrough without interrupt remapping. Device assignment will fail with 'Failed to assign device "[device name]": Operation not permitted' or 'Interrupt Remapping hardware not found, passing devices to unprivileged domains is insecure.'. + +All systems using an Intel processor and chipset that have support for Intel Virtualization Technology for Directed I/O (VT-d), but do not have support for interrupt remapping will see such an error. Interrupt remapping support is provided in newer processors and chipsets (both AMD and Intel). + +To identify if your system has support for interrupt remapping: + +
+dmesg | grep 'remapping'
+
+ +If you see one of the following lines: + +* AMD-Vi: Interrupt remapping enabled +* DMAR-IR: Enabled IRQ remapping in x2apic mode ('x2apic' can be different on old CPUs, but should still work) + +then remapping is supported. + +If your system doesn't support interrupt remapping, you can allow unsafe interrupts with: + +
+echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
+
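As with the other /etc/modprobe.d/ changes described on this page, the new option is normally only picked up after refreshing the initramfs and rebooting the host (a minimal sketch):

 update-initramfs -u -k all
 reboot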
+

=== Verify IOMMU isolation ===

For working PCI passthrough, you need a dedicated IOMMU group for all PCI devices you want to assign to a VM.

When you execute

 # pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""

(replacing {nodename} with the name of your node), you should get a list similar to:
+┌──────────┬────────┬──────────────┬────────────┬────────┬───────────────────────────────────────────────────────────────────┬...
+│ class    │ device │ id           │ iommugroup │ vendor │ device_name                                                       │
+╞══════════╪════════╪══════════════╪════════════╪════════╪═══════════════════════════════════════════════════════════════════╪
+│ 0x010601 │ 0xa282 │ 0000:00:17.0 │          5 │ 0x8086 │ 200 Series PCH SATA controller [AHCI mode]                        │
+├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────────────────────┼
+│ 0x010802 │ 0xa808 │ 0000:02:00.0 │         12 │ 0x144d │ NVMe SSD Controller SM981/PM981/PM983                             │
+├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────────────────────┼
+│ 0x020000 │ 0x15b8 │ 0000:00:1f.6 │         11 │ 0x8086 │ Ethernet Connection (2) I219-V                                    │
+├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────────────────────┼
+│ 0x030000 │ 0x5912 │ 0000:00:02.0 │          2 │ 0x8086 │ HD Graphics 630                                                   │
+├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────────────────────┼
+│ 0x030000 │ 0x1d01 │ 0000:01:00.0 │          1 │ 0x10de │ GP108 [GeForce GT 1030]                                           │
+├──────────┼────────┼──────────────┼────────────┼────────┼───────────────────────────────────────────────────────────────────┼
+.
+.
+.
+
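As a quick cross-check that does not go through the Proxmox VE API (plain sysfs, not Proxmox-specific), you can also list the IOMMU group membership directly:

 find /sys/kernel/iommu_groups/ -type l

Each printed path shows which group a PCI address belongs to; a device you want to pass through should not share its group with devices the host still needs.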
+

To have separate IOMMU groups, your processor needs to have support for a feature called ACS (Access Control Services). Make sure you enable the corresponding setting in your BIOS for this.

If you don't have dedicated IOMMU groups, you can try moving the card to another PCI slot.

Should that not work, you can try using [https://lkml.org/lkml/2013/5/30/513 Alex Williamson's ACS override patch]. However, this should be seen as a last option
and is [http://vfio.blogspot.be/2014/08/iommu-groups-inside-and-out.html not without risks].

As of writing, the ACS patch is part of the Proxmox VE kernel and can be invoked via [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline Editing the kernel command line]. Add
 pcie_acs_override=downstream
to the kernel boot command line (grub or systemd-boot) options.

More information can be found at [http://vfio.blogspot.com/ Alex Williamson's blog].

== GPU passthrough ==

{{Note|See http://blog.quindorian.org/2018/03/building-a-2u-amd-ryzen-server-proxmox-gpu-passthrough.html/ if you would like an article with a How-To approach. (NOTE: you usually do not need the ROM-file dumping mentioned at the end!)}}

* AMD RADEON 5xxx, 6xxx, 7xxx, NVIDIA GeForce 7, 8, GTX 4xx, 5xx, 6xx, 7xx, 9xx, 10xx, 15xx, 16xx, and RTX 20xx have been reported working. Anything newer should work as well.
* AMD Navi (5xxx(XT)/6xxx(XT)) cards suffer from the reset bug (see https://github.com/gnif/vendor-reset), and while dedicated users have managed to get them to run, they require a lot more effort and will probably not be entirely stable (see the [[PCI_Passthrough#AMD_specific_issues|AMD specific issues]] for workarounds).
* You might need to set specific options in grub.cfg or other tuning values to get your configuration working and stable.
* Here's a good forum thread on the Arch Linux forums: https://bbs.archlinux.org/viewtopic.php?id=162768

For starters, it's often helpful if the host doesn't try to use the GPU, which avoids issues with the host driver unbinding and re-binding to the device. Sometimes making sure the host BIOS POST messages are displayed on a different GPU is helpful too. This can sometimes be accomplished via BIOS settings, moving the card to a different slot, or enabling/disabling legacy boot support.

=== Blacklisting drivers ===

The following is a list of common drivers and how to blacklist them:

* AMD GPUs
+echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
+echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
+
+* NVIDIA GPUs +
+echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf 
+echo "blacklist nvidia*" >> /etc/modprobe.d/blacklist.conf 
+
+* Intel GPUs +
+echo "blacklist i915" >> /etc/modprobe.d/blacklist.conf
+
+{{Note | If you are using an Intel iGPU and an Intel discrete GPU, blacklisting the Intel 'i915' drivers that the discrete GPU uses means the iGPU won't be able to use those drivers either.}}

After blacklisting, you will need to reboot.

=== How to know if a graphics card is UEFI (OVMF) compatible ===
Have a look at [[PCI passthrough#Requirements|the requirements section]]. Chances are you are using the BIOS listed for your device on the Techpowerup GPU ROM list, which will say if it is UEFI compatible or not.

Alternatively, you can dump your ROM and use Alex Williamson's rom-parser tool:

{{ Note | You will want to run the following commands logged in as root user (by running su -) or by wrapping them with sudo sh -c "", otherwise the bash-redirects in the code-snippets below won't work}}

Get and compile the software "rom-parser":
 git clone https://github.com/awilliam/rom-parser
 cd rom-parser
 make

Then dump the rom of your vga card:
 cd /sys/bus/pci/devices/0000:01:00.0/
 echo 1 > rom
 cat rom > /tmp/image.rom
 echo 0 > rom

and test it with:
 ./rom-parser /tmp/image.rom

The output should look like this:

 Valid ROM signature found @0h, PCIR offset 190h
 PCIR: type 0, vendor: 10de, device: 1280, class: 030000
 PCIR: revision 0, vendor revision: 1
 Valid ROM signature found @f400h, PCIR offset 1ch
 PCIR: type 3, vendor: 10de, device: 1280, class: 030000
 PCIR: revision 3, vendor revision: 0
 EFI: Signature Valid
 Last image

To be UEFI compatible, you need a "type 3" in the result.

=== The 'romfile' option ===

Some motherboards can't pass through GPUs on the first PCI(e) slot by default, because the card's vBIOS is shadowed during boot up. You need to capture the vBIOS while the card is working "normally" (i.e. installed in a different slot), then you can move the card to slot 1 and start the VM using the dumped vBIOS.

To dump the bios:
+cd /sys/bus/pci/devices/0000:01:00.0/
+echo 1 > rom
+cat rom > /usr/share/kvm/vbios.bin
+echo 0 > rom
+
+ +Then you can pass the vbios file (must be located in /usr/share/kvm/) with: +
+hostpci0: 01:00,x-vga=on,romfile=vbios.bin
+
+ +=== Tips === + +Some Windows applications like GeForce Experience, Passmark Performance Test and SiSoftware Sandra can crash the VM. +You need to add: +
+echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
+
+ +If you see a lot of warning messages in your 'dmesg' system log, add the following instead: +
+echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf
+
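The option only takes effect after the kvm module is reloaded or the host is rebooted. As a rough check, you can read the currently active value back from sysfs:

 cat /sys/module/kvm/parameters/ignore_msrs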
+

==== Nvidia Tips ====
Users have reported that NVIDIA Kepler K80 GPUs need this in vmid.conf:
+args: -machine pc,max-ram-below-4g=1G
+
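The args line can be added by editing /etc/pve/qemu-server/<vmid>.conf directly, or (sketched here) via qm:

 qm set <vmid> --args '-machine pc,max-ram-below-4g=1G'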
+ +== Troubleshooting == + +=== "BAR 3: can't reserve [mem]" error === + +If you have this error when you try to use the card for a VM: +
+vfio-pci 0000:04:00.0: BAR 3: can't reserve [mem 0xca000000-0xcbffffff 64bit]
+
+ +you can try to add the following kernel command line option: +
+video=efifb:off
+
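On a host booted via GRUB this usually means appending the option to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and running update-grub; hosts using systemd-boot edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead. A sketch for the GRUB case (assuming the line previously contained only "quiet"):

 GRUB_CMDLINE_LINUX_DEFAULT="quiet video=efifb:off"
 update-grub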
+ +Check out the documentation about [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline editing the kernel command line]. + +=== WSLg (Windows Subsystem for Linux GUI)=== +If GUI apps don't open in WSLg, see [https://pve.proxmox.com/wiki/Windows_2022_guest_best_practices#Installing_WSL.28g.29 Windows 2022 guest best practices]. + +=== Black display in NoVNC/Spice === + +If you are passing through a GPU and are getting a black screen, you might need to change your display settings in the Guest OS. On Windows, this can be done by pressing the "Super/Windows" and "P" key. Alternatively, if you are using the GPU for hardware accelerated computing and need no graphical output from it, you can deselect the "primary GPU" option and physically disconnect your GPU. + +=== Spice === + +Spice may give trouble when passing through a GPU as it presents a "virtual" PCI graphic card to the guest and some drivers have problems with that, even when both cards show up. +It's always worth a try to disable SPICE and check again if something fails. + +=== HDMI audio crackling/broken === + +Some digital audio devices (usually added via GPU functions) may require MSI (Message Signaled Interrupts) to be enabled to function correctly. If you experience any issues, try changing MSI settings in the guest and rebooting the guest. + +Linux guests usually enable MSI by themselves. To force use of MSI for GPU audio devices, use the following command and reboot: + +
+echo "options snd-hda-intel enable_msi=1" >> /etc/modprobe.d/snd-hda-intel.conf
+
+ +Use 'lspci -vv' and check for the following line on your device to see if MSI is enabled: + +
+Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
+
+

If it says 'Enable+', MSI is working, 'Enable-' means it is supported but disabled, and if the line is missing, MSI is not supported by the PCIe hardware.

This can potentially also improve performance for other passthrough devices, including GPUs, but that depends on the hardware being used.

=== BIOS options ===

Make sure you are using the most recent BIOS version for your motherboard. Often IOMMU groupings or passthrough support in general is improved in later versions.

Some general BIOS options that might need changing to allow passthrough to work:

* IOMMU or VT-d: Set to 'Enabled' or equivalent; 'Auto' is often not the same.
* 'Legacy boot' or CSM: For GPU passthrough it can help to disable this, but keep in mind that PVE has to be installed in UEFI mode, as it will not boot in BIOS mode without this enabled. The reason for disabling this is that it avoids legacy VGA initialization of installed GPUs, making them able to be re-initialized later, as required for passthrough. Most useful when trying to use passthrough in single GPU systems.
* 'Resizable BAR'/'Smart Access Memory': Some AMD GPUs (Vega and up) experience 'Code 43' in Windows guests if this is enabled on the host. It's not supported in VMs either way (yet), so the recommended setting is 'off'.

=== Error 43 ===
[https://support.microsoft.com/en-us/windows/fix-graphics-device-problems-with-error-code-43-6f6ae1ec-0bbe-a848-142e-0c6190502842 Error code 43] is a generic Windows driver error and can occur for a wide number of reasons. Things you can try while troubleshooting include:

==== Finding out if the PCI device has a hardware fault ====
* Try passing the PCI device to a Linux VM
* Try plugging the PCI device into a different PCI slot or into a different machine

==== Finding software issues ====
* Check the security event logs of your Windows VM
* Check the dmesg logs of your host machine
* [[PCI Passthrough#How_to_know_if_a_Graphics_Card_is_UEFI_.28OVMF.29_compatible|Dump your vBIOS]] and check if it is working correctly.
* Try a different vbios (see [[PCI_passthrough#Requirements| the GPU requirements section]])
* If your GPU supports resizable BAR/SAM and you have this option set in your BIOS, you might need to deactivate it or manually tweak your BAR using a udev rule (see [https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#Code_43_while_Resizable_Bar_is_turned_on_in_the_bios Code 43 while Resizable Bar is turned on in the bios] in the Arch wiki)
* Sometimes the issue is very hardware-dependent. You might find that someone else with the same hardware has already found a solution. Try searching the internet with keywords containing your hardware, together with keywords like "Proxmox", "KVM", or "Qemu".

==== Nvidia specific issues ====

When passing through mobile GPUs or vGPUs, it might be necessary to spoof the Vendor ID and Device ID as if the passed-through GPU were the desktop variant. Changing the IDs might also be needed to remove manufacturer-specific vendor ID variants that are not recognized otherwise.

The Vendor and Device ID can be added in the web interface under "Hardware" -> "PCI Device (hostpciX)" and then clicking on the "Advanced" checkbox, or directly in the VM configuration file as sketched below.

Some software will also refuse to run when it detects that it is running in a VM. This should no longer be an issue with Nvidia drivers 465 and newer.
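In the VM configuration file this corresponds to the vendor-id and device-id sub-options of hostpciX described in the options reference above; a sketch with placeholder PCI address and IDs:

 hostpci0: 01:00.0,x-vga=on,vendor-id=0x10de,device-id=0x1d01

Replace the address and hex IDs with the values of the variant you want the guest to see.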
+
+To find the Vendor ID and Device ID of the card installed on your host, run:
+ lspci -nn
+which will give you something similar to
+ 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP108 [GeForce GT 1030] [10de:1d01] (rev a1)
+Here, 0x10de is the Vendor ID and 0x1d01 the Device ID.
+
+==== AMD specific issues ====
+Some AMD cards suffer from the "AMD reset bug" where the GPU does not correctly reset after power cycling. This can be remedied with the [https://github.com/gnif/vendor-reset/ vendor-reset patch]. See also [https://www.nicksherlock.com/2020/11/working-around-the-amd-gpu-reset-bug-on-proxmox/ Nick Sherlock's writeup] on the issue.
+
+== USB passthrough ==
+If you need to pass through USB devices (keyboard, mouse), please follow the [[USB Physical Port Mapping]] wiki article.
+
+== vGPU ==
+If you want to split up one GPU into multiple vGPUs, see:
+* [https://pve.proxmox.com/wiki/MxGPU_with_AMD_S7150_under_Proxmox_VE_5.x MxGPU with AMD S7150]
+* [https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE_7.x NVIDIA vGPU]
+
+[[Category:Staging]]
+```
+## vGPUs
+
+```wiki
+== Introduction ==
+
+This is a Testing Report and How-To for using the MxGPU feature of an AMD S7150 Graphics card under PVE 5.x.
+These cards can provide hardware-accelerated 3D graphics to multiple VMs with a single card instead of using one card per VM (normal PCI passthrough) or using a software 3D graphics card (QXL/Spice).
+
+AMD's open source GIM driver ([https://github.com/GPUOpen-LibrariesAndSDKs/MxGPU-Virtualization GIM Open Source Driver]) is needed on the host.
+
+'''WARNING: Our tests showed that this may be unstable and experimental, please see the 'Notes' section below for more details.'''
+
+== Hardware Notes ==
+
+We tested the card in the following configurations:
+{| class="wikitable"
+|-
+! Works !! Hardware Type !! Mainboard !! CPU !! Memory !! Errors !! Notes
+|-
+| No || Consumer || ASUS Z170-A || Intel 6700k || 32GB DDR4 Memory || Loading GIM failed with a PCI Bus Error that it did not have sufficient resources. || The firmware of the Mainboard is not suited for this use.
+|-
+| No || Low-end Server || Supermicro X10SDV-6C-TLN4F || Intel Xeon D-1528 || 32GB DDR4 Memory || PCI Bus Errors during use resulting in guest and host crashes. || The Platform is not suited for this use.
+|-
+| Yes || High-end Server || Supermicro H11SSL-i || AMD Epyc 7351P 16-Core Processor || 64 GB DDR4 Memory || Linux guest instability || OPROM for this card has to be set to Legacy.
+|}
+
+== Host Configuration ==
+
+Make sure that the 'amdgpu' module is blacklisted before installing the card.
+This can be done via a file in /etc/modprobe.d/. For example put
+
+ blacklist amdgpu
+
+into /etc/modprobe.d/blacklist-amdgpu.conf
+
+Do not forget to update the initramfs ([https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_pci_passthrough_update_initramfs see the reference documentation]) afterwards.
+
+After that, you have to compile and install GIM. For this, you need at least the packages 'git', 'pve-headers', 'gcc' and 'make'.
+
+See their documentation about how you can compile and configure the module.
+
+You can install the module via DKMS (dynamic kernel module support, see the [https://wiki.debian.org/KernelDKMS Debian DKMS documentation]); the module then gets recompiled automatically on every kernel upgrade.
+
+After installing the module, you can do
+
+ modprobe gim
+
+and now you should see the virtual functions via 'lspci'.
+Example output:
+ ...
+ 41:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] TongaXT GL [FirePro S7150]
+ 41:02.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] TongaXTV GL [FirePro S7150V]
+ 41:02.1 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] TongaXTV GL [FirePro S7150V]
+ ...
+
+Those devices (FirePro S7150V) can now be passed through via the standard PCI passthrough mechanism (see the [https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_pci_passthrough PCI passthrough documentation]).
+
+== Client Configuration ==
+
+=== Windows 10 ===
+
+Create a new VM, pass through a Virtual GPU, and install Windows 10.
+
+After that, enable Remote Desktop and install the Radeon Pro drivers ([https://www.amd.com/en/support/professional-graphics/firepro/firepro-s-series/firepro-s7150-active-cooling AMD Radeon Pro drivers for the S7150]).
+
+After a reboot of the VM, you can now connect via Remote Desktop to the VM and use the graphics card.
+
+
+File:Windows 10 Blender.png|Rendering with OpenCL in Blender in Windows 10
+File:Windows 10 Unigine Valley.png|Running Unigine Valley Benchmark in Windows 10
+
+
+=== Ubuntu 18.04 ===
+
+Create a new VM without passing through the Virtual GPU yet, and install Ubuntu 18.04.
+
+After that, install the amdgpu-pro driver from AMD's homepage.
+
+While testing, we found that the AMDGPU Pro driver version 18.40 ([https://www.amd.com/en/support/kb/release-notes/rn-prorad-lin-18-40 release notes]) works most of the time. (18.30 did load, but produced many guest kernel errors and prevented its use; 18.50 resulted in guest kernel oopses.)
+
+Install a desktop environment (for example XFCE with the meta-package xubuntu-desktop) and a display manager (for instance lightdm).
+
+Install a VNC server ([https://help.ubuntu.com/community/VNC/Servers Ubuntu VNC server documentation]) or similar to be able to access a local X server, and configure it to start automatically.
+
+Now power off the VM, add the virtual function and start the VM again.
+
+Note: Depending on the exact guest kernel and driver version, there may be some kernel errors and warnings even if it is working.
+
+At this point, you should be able to connect via VNC (or another protocol) and use the virtual GPU.
+
+
+File:Valley ubuntu 18.04.png|Unigine Valley on Ubuntu 18.04
+File:Ubuntu supertuxkart.png|Super Tux Kart on Ubuntu 18.04
+
+
+== Notes ==
+
+=== Stability ===
+
+In our tests the Linux guest drivers were very unstable. It worked with a single Ubuntu guest, but led to crashes/hangs (of the guests and the host) after another guest was started, regardless of the client OS of the other guests.
+
+The Windows guest drivers were more stable, but there were occasional resets/crashes and sometimes blue screens in the guest after starting multiple Windows guests. This only occurred when at least one Linux guest had been started since boot, so starting and using only Windows guests should work. (The relevant bug report is here: https://github.com/GPUOpen-LibrariesAndSDKs/MxGPU-Virtualization/issues/16)
+
+=== Debugging ===
+
+For debugging purposes, AMD includes the useful tool 'GRU' with the sources of GIM. This is found in the 'utils/gru' folder and can simply be built with 'make'.
+
+You can use this tool to see which virtual functions are in use and how many resources are used. It also provides a mechanism to reset the card and its functions.
+Here is an example output:
+
+ GRU
+ Copyright (C) 2017~2018 Advanced Micro Devices, Inc.
+
+ Type 'help' for help. Optional launch parameter is index of card to use.
+ GRU> status + + +-----+--------------+----------+------------+-----------+--------------------+ + | GPU | Name | Cur Volt | GFX EngClk | Mem Usage | Current DPM Level | + | | BusId | Temp | Avail VF | GFX Usage | Power Usage | + +=====+==============+==========+============+===========+====================+ + | 0 | S7150 | 0.5750 V | 313.10 MHz | 49.79 % | 1 | + | | 0000:41:00.0 | 57.00 C | 2 | 45.17 % | 33.37 W | + +-----+--------------+----------+------------+-----------+--------------------+ + GRU> list + + +-----+--------------+---------+------------+--------+------------+-----------+ + | GPU | Name | DPM Cap | FB Size | Max VF | GFX Engine | PL Speed | + | | BusId | PWR Cap | Encoder | ECC | MAX Clock | PL Width | + +=====+==============+=========+============+========+============+===========+ + | 0 | S7150 | 8 | 8190 M | 4 | GFX8 | 8 GT/s | + | | 0000:41:00.0 | 109 W | None | No | 1000 MHz | x16 | + +-----+--------------+---------+------------+--------+------------+-----------+ + GRU> open 0000:41:00.0 + GRU>GPU:41:00.0> list + + +----+--------+--------------+------------+-----------+---------+-------------+ + | VF | Type | BusId | Name | VF State | VF Size | GFX EngPart | + +====+========+==============+============+===========+=========+=============+ + | 0 | S7150 | 0000:41:02.0 | MxGPU_V1_4 | Active | 1968 M | 25% | + +----+--------+--------------+------------+-----------+---------+-------------+ + | 1 | S7150 | 0000:41:02.1 | MxGPU_V1_4 | Active | 1968 M | 25% | + +----+--------+--------------+------------+-----------+---------+-------------+ + | 2 | S7150 | 0000:41:02.2 | MxGPU_V1_4 | Available | 1968 M | 25% | + +----+--------+--------------+------------+-----------+---------+-------------+ + | 3 | S7150 | 0000:41:02.3 | MxGPU_V1_4 | Available | 1968 M | 25% | + +----+--------+--------------+------------+-----------+---------+-------------+ + GRU>GPU:41:00.0> status + + +----+-------------+--------------+--------------+--------------+-------------+ + | VF | Type | BusId | Active Time | Running Time | Reset Times | + +====+=============+==============+==============+==============+=============+ + | 0 | S7150 | 0000:41:02.0 | 0:41:48 | 0:59:29 | 0 | + +----+-------------+--------------+--------------+--------------+-------------+ + | 1 | S7150 | 0000:41:02.1 | 0:14:13 | 0:31:50 | 0 | + +----+-------------+--------------+--------------+--------------+-------------+ + | 2 | S7150 | 0000:41:02.2 | 0:0:0 | 0:0:0 | 0 | + +----+-------------+--------------+--------------+--------------+-------------+ + | 3 | S7150 | 0000:41:02.3 | 0:0:0 | 0:0:0 | 0 | + +----+-------------+--------------+--------------+--------------+-------------+ + GRU>GPU:41:00.0> + +==References== + + +[[Category: HOWTO]] +[[Category: Qemu/KVM]] +``` + +## NVIDIA vGPU on Proxmox VE + +```wiki== Introduction == + +NVIDIA vGPU technology enables multiple virtual machines to use a single supportedNVIDIA GPUs supported by vGPU https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html physical GPU. + +This article explains how to use NVIDIA vGPU on Proxmox VE. The instructions were tested using an RTX A5000. + +== Disclaimer == + +At the time of writing, Proxmox VE is not an officially supported platform for NVIDIA vGPU. This means that even with valid vGPU licenses, you may not be eligible for NVIDIA enterprise support for this use-case. +However, Proxmox VE's kernel is derived from the Ubuntu kernel, which is a supported platform for NVIDIA vGPU as of 2024. 
+ +Note that although we are using some consumer hardware in this article, for optimal performance in production workloads, we recommend using appropriate enterprise-grade hardware. +Please refer to NVIDIA's support page to verify hardware compatibility +NVIDIA vGPU Certified Servers https://www.nvidia.com/en-us/data-center/resources/vgpu-certified-servers/ +NVIDIA GPUs supported by vGPU https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html. + +== Hardware Setup == + +We're using the following hardware configuration for our test: + +{| class="wikitable" +|+ Test System +|- +| CPU || Intel Core i7-12700K +|- +| Motherboard || ASUS PRIME Z690-A +|- +| Memory || 128 GB DDR5 Memory: 4x Crucial CT32G48C40U5 +|- +| GPU || PNY NVIDIA RTX A5000 +|} + +Some NVIDIA GPUs do not have vGPU enabled by default, even though they support vGPU, like the RTX A5000 we tested. To enable vGPU there, switch the display using the NVIDIA Display Mode Selector ToolNVIDIA Display Mode Selector Tool https://developer.nvidia.com/displaymodeselector. +This will disable the display ports. + +For a list of GPUs where this is necessary check their documentationLatest NVIDIA vGPU user guide: Switching the Mode of a GPU that Supports Multiple Display Modes https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#displaymodeselector. + +The installation was tested on the following versions of Proxmox VE, Linux kernel, and NVIDIA drivers: + +{| class="wikitable" +|- +! pve-manager !! kernel !! vGPU Software Branch !! NVIDIA Host drivers +|- +| 7.2-7 || 5.15.39-2-pve || 14.1 || 510.73.06 +|- +| 7.2-7 || 5.15.39-2-pve || 14.2 || 510.85.03 +|- +| 7.4-3 || 5.15.107-2-pve || 15.2 || 525.105.14 +|- +| 7.4-17 || 6.2.16-20-bpo11-pve || 16.0 || 535.54.06 +|- +| 8.1.4 || 6.5.11-8-pve || 16.3 || 535.154.02 +|- +| 8.1.4 || 6.5.13-1-pve || 16.3 || 535.154.02 +|} + +It is recommended to use the latest stable and supported version of Proxmox VE and NVIDIA drivers. +However, newer versions in one vGPU Software Branch should also work for the same or older kernel version. + +Since version 16.0, certain cards are no longer supported by the NVIDIA vGPU driver, but are supported by the Enterprise AI driver +NVIDIA GPUs supported by vGPU https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html +NVIDIA GPUs supported by AI Enterprise https://docs.nvidia.com/ai-enterprise/latest/product-support-matrix/index.html. +We have tested the Enterprise AI driver with an A16 and vGPU technology and found that it behaves similarly to the old vGPU driver. Therefore, the following steps also apply. + +== Preparation == + +Before actually installing the host drivers, there are a few steps to be done on the Proxmox VE host. + +'''Tip''': If you need to use a root shell, you can, for example, open one by connecting via SSH or using the node shell on the Proxmox VE web interface. + +=== Enable PCIe Passthrough === + +Make sure that your system is compatible with PCIe passthrough. See the [https://pve.proxmox.com/wiki/PCI(e)_Passthrough PCI(e) Passthrough] documentation. + +Additionally, confirm that the following features are enabled in your firmware settings (BIOS/UEFI): + +* VT-d for Intel, or AMD-v for AMD (sometimes named IOMMU) +* SR-IOV (this may not be necessary for older pre-Ampere GPU generations) +* Above 4G decoding +* PCI AER (Advanced Error Reporting) +* PCI ASPM (Active State Power Management) + +The firmware of your host might use different naming. 
If you are unable to locate some of these options, refer to the documentation provided by your firmware or motherboard manufacturer.
+
+'''Note''': It is crucial to ensure that the IOMMU options are enabled both in your firmware and in the kernel.
+
+=== Setup Proxmox VE Repositories ===
+
+Proxmox VE comes with the enterprise repository set up by default, as this repository provides better-tested software and is recommended for production use.
+The enterprise repository needs a valid subscription per node. For evaluation or non-production use cases you can simply switch to the public no-subscription repository. This provides the same feature set, but with a higher frequency of updates.
+
+You can use the Repositories management panel in the Proxmox VE web UI for managing package repositories, see the [https://pve.proxmox.com/wiki/Package_Repositories documentation] for details.
+
+=== Update to Latest Package Versions ===
+
+Proxmox VE uses a rolling release model and should be updated frequently to ensure that your Proxmox VE installation has the latest bug fixes, security fixes, and features available.
+
+You can update your Proxmox VE node using the update panel on the web UI.
+
+=== Blacklist the Nouveau Driver ===
+
+Next, you want to blacklist the open source nouveau kernel module to prevent it from interfering with the one from NVIDIA.
+
+To do that, add a line with blacklist nouveau to a file in the /etc/modprobe.d/ directory.
+For example, open a root shell and execute:
+ echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
+
+Then, [https://pve.proxmox.com/wiki/PCI(e)_Passthrough#qm_pci_passthrough_update_initramfs update your initramfs] to ensure that the module is blocked from loading at early boot, and then reboot your host.
+
+=== Setup DKMS ===
+
+Because the NVIDIA module is separate from the kernel, it must be rebuilt with Dynamic Kernel Module Support (DKMS) for each new kernel update.
+
+To set up DKMS, you must install the headers package for the kernel and the DKMS helper package. In a root shell, run
+
+ apt update
+ apt install dkms libc6-dev proxmox-default-headers --no-install-recommends
+
+'''Note''': If you do not have the default kernel version installed, but for example an opt-in kernel, you must install the appropriate proxmox-headers-X.Y package instead of proxmox-default-headers.
+
+== Host Driver Installation ==
+
+'''Note''': The driver/file versions shown in this section are examples only; use the correct file names for the selected driver you're installing.
+
+To get started, you will need the appropriate host and guest drivers; see the NVIDIA Virtual GPU Software Quick Start Guide ([https://docs.nvidia.com/grid/latest/grid-software-quick-start-guide/index.html#getting-your-nvidia-grid-software Getting your NVIDIA GRID Software]) for instructions on how to obtain them.
+Choose Linux KVM as the target hypervisor when downloading.
+
+In our case we got the following host driver file:
+ NVIDIA-Linux-x86_64-525.105.14-vgpu-kvm.run
+Copy this file over to your Proxmox VE node.
+
+To start the installation, you need to make the installer executable first, and then pass the --dkms option when running it, to ensure that the module is rebuilt after a kernel upgrade:
+ chmod +x NVIDIA-Linux-x86_64-525.105.14-vgpu-kvm.run
+ ./NVIDIA-Linux-x86_64-525.105.14-vgpu-kvm.run --dkms
+Follow the steps of the installer.
+
+After the installer has finished successfully, you will need to reboot your system, either using the web interface or by executing reboot.
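+
+After the reboot, you can do a quick sanity check that the module was built and loaded; a hedged example (module names and output vary with the driver branch):
+
+ dkms status          # should list the NVIDIA vGPU module built for the running kernel
+ lsmod | grep nvidia  # NVIDIA modules (e.g. nvidia, nvidia_vgpu_vmm) should be loaded
+ nvidia-smi           # should show the physical GPU handled by the vGPU host driver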
+ +=== Enabling SR-IOV === + +On some NVIDIA GPUs (for example, those based on the Ampere architecture), you must first enable SR-IOV before being able to use vGPUs. +You can do that with the sriov-manage script from NVIDIA. + + /usr/lib/nvidia/sriov-manage -e + + +Since that setting gets lost on reboot, it might be a good idea to write a cronjob or systemd service to enable it on reboot. + +Here is an example systemd service for enabling SR-IOV on all found NVIDIA GPUs: + +
+[Unit]
+Description=Enable NVIDIA SR-IOV
+After=network.target nvidia-vgpud.service nvidia-vgpu-mgr.service
+Before=pve-guests.service
+
+[Service]
+Type=oneshot
+ExecStart=/usr/lib/nvidia/sriov-manage -e ALL
+
+[Install]
+WantedBy=multi-user.target
+
+ +Depending on the actual hardware, it might be necessary to give the nvidia-vgpud.service a bit more time to start, you can do that by adding + ExecStartPre=/bin/sleep 5 +just before the ExecStart line in the service file (replace '5' by an appropriate amount of seconds). + +You can save this in /usr/local/lib/systemd/system/nvidia-sriov.service. Then enable and start it with: + + systemctl daemon-reload + systemctl enable --now nvidia-sriov.service + +This will then run after the nvidia-daemons got started, but before the Proxmox VE virtual guest auto start-up. + +Verify that there are multiple virtual functions for your device with: + + # lspci -d 10de: + +In our case there are now 24 virtual functions in addition to the physical card (01:00.0): + + 01:00.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:00.4 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:00.5 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:00.6 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:00.7 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:01.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:01.1 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:01.2 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:01.3 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:01.4 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:01.5 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:01.6 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:01.7 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:02.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:02.1 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:02.2 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:02.3 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:02.4 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:02.5 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:02.6 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:02.7 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:03.0 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:03.1 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:03.2 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + 01:03.3 3D controller: NVIDIA Corporation GA102GL [RTX A5000] (rev a1) + +== Guest Configuration == + +=== General Setup === + +First, set up a VM as you normally would, without adding a vGPU. + +After configuring the VM to your liking, shut down the VM and add a vGPU by selecting one of the virtual functions and selecting the appropriate mediated device type. + +For example: + +Via the CLI: + + qm set VMID -hostpci0 01:00.4,mdev=nvidia-660 + +Via the web interface: + +[[File:Pve select vgpu.png|500px|none|Selecting a vGPU model]] + +To find the correct mediated device type, you can use sysfs. +Here is a sample shell script that prints the type, then the name (which corresponds to the NVIDIA documentation) and the description, which contains helpful information (such as the maximum number of instances available). +Adjust the PCI path to your needs: + +
+#!/bin/sh
+set -e
+
+for i in /sys/bus/pci/devices/0000:01:00.4/mdev_supported_types/*; do
+    basename "$i"
+    cat "$i/name"
+    cat "$i/description"
+    echo
+done
+
+ +Since pve-manager version 7.2-8 and libpve-common-perl version 7.2-3, the GUI shows the correct name for the type. + +If your qemu-server version is below 7.2-4, you must add an additional parameter +to the vm: + + # qm set VMID -args '-uuid ' + +The UUID of the mediated device is automatically generated from the VMID and the hostpciX index of the config, where the host PCI index is used as the first part and the VMID as the last part. +For example, if you configure hostpci2 for VM with VMID 12345, the generated UUID will be + + 00000002-0000-0000-0000-000000012345 + +You can now start the VM and continue configuring the guest from within. + +We tested a Windows 10 and Ubuntu 22.04 installation, but the setup will be similar for other supported operating systems. + +=== Windows 10 === + +First install and configure a desktop sharing software that matches your requirements. Some +examples of such software include: + +* '''VNC'''
many different options, some free, some commercial +* '''Remote Desktop'''
built into Windows itself +* '''Parsec'''
Costs money for commercial use, allows using hardware accelerated encoding +* '''RustDesk'''
free and open source, but relatively new as of 2022 + +We used simple Windows built-in remote desktop for testing. + +[[File:Windows rdp.png|thumb|Enabling Remote Desktop in Windows 10]] + +Then you can install the Windows guest driver that is published by NVIDIA. +Refer to their documentationNVIDIA Virtual GPU (vGPU) Software Documentation https://docs.nvidia.com/grid/to find a compatible guest driver to host driver mapping. +In our case this was the file + + 528.89_grid_win10_win11_server2019_server2022_dch_64bit_international.exe + +Start the installer and follow the instructions, then, after it finished restart the guest as prompted. + + +Windows nv install01.png|Starting NVIDIA driver installation +Windows nv install02.png|Accepting the license agreement +Windows nv install03.png|Finishing the installation + + +From this point on, the integrated noVNC console of PVE will not be usable anymore, so use +your desktop sharing software to connect to the guest. Now you can use the vGPU for starting +3D applications such as Blender, 3D games, etc. + + +Windows valley.png|Unigine Valley +Windows supertuxkart.png|SuperTuxKart +Windows blender.png|Blender + + +=== Ubuntu 22.04 Desktop === + +Before installing the guest driver, install and configure a desktop sharing software, for example: + +* '''VNC'''
many options. We use x11vnc here, which is free and open source, but does not currently provide hardware accelerated encoding +* '''NoMachine'''
provides hardware accelerated encoding, but is not open source and costs money for business use +* '''RustDesk'''
free and open source, but relatively new as of 2022 + +We installed x11vnc in this example. While we're showing how to install and configure it, this is not the only way to achieve the goal of having properly configured desktop sharing. + +Since Ubuntu 22.04 ships GDM3 + Gnome + Wayland per default, you first need to switch the login manager to one that uses X.org. +We successfully tested LightDM here, but others may work as well. + + # apt install lightdm + +Select 'LightDM' as default login manager when prompted. After that install x11vnc with + + # apt install x11vnc + +We then added a systemd service that starts the VNC server on the x.org server provided by LightDM in /etc/systemd/system/x11vnc.service + +
+[Unit]
+Description=Start x11vnc
+After=multi-user.target
+
+[Service]
+Type=simple
+ExecStart=/usr/bin/x11vnc -display :0 -auth /var/run/lightdm/root/:0 -forever -loop -repeat -rfbauth /etc/x11vnc.passwd -rfbport 5900 -shared -noxdamage
+
+[Install]
+WantedBy=multi-user.target
+
+
+You can set the password by executing:
+
+ # x11vnc -storepasswd /etc/x11vnc.passwd
+ # chmod 0400 /etc/x11vnc.passwd
+
+After setting up LightDM and x11vnc and restarting the VM, you can connect via VNC.
+
+Now, install the .deb package that NVIDIA provides for Ubuntu.
+Check the NVIDIA documentation for a compatible guest driver to host driver mapping.
+
+In our case this was nvidia-linux-grid-525_525.105.17_amd64.deb, and we installed it directly from the local file using apt.
+For that to work, you must prefix the relative path, for example ./ if the .deb file is located in the current directory.
+
+ # apt install ./nvidia-linux-grid-525_525.105.17_amd64.deb
+
+Then you must use NVIDIA's tool to generate the X.org configuration:
+
+ # nvidia-xconfig
+
+Now you can reboot and use a VNC client to connect and use the vGPU for 3D applications.
+
+
+Ubuntu valley.png|Unigine Valley
+Nv Ubuntu supertuxkart.png|SuperTuxKart
+Ubuntu blender.png|Blender
+
+
+{{Note| If you want to use CUDA on a Linux guest, you must install the CUDA Toolkit manually ([https://developer.nvidia.com/cuda-downloads NVIDIA CUDA Toolkit download]).
+Check the NVIDIA documentation to see which version of CUDA is supported for your vGPU drivers.
+
+In our case we needed to install CUDA 11.6 (only the toolkit, not the driver) with the file:
+
+ cuda_11.6.2_510.47.03_linux.run
+|warn}}
+
+=== Guest vGPU Licensing ===
+
+To use the vGPU without restriction, you must adhere to NVIDIA's licensing.
+Check the NVIDIA vGPU documentation ([https://docs.nvidia.com/grid/latest/grid-licensing-user-guide/index.html NVIDIA GRID Licensing User Guide]) for instructions on how to do so.
+
+'''Tip''': Ensure that the guest system time is properly synchronized using NTP, otherwise the guest will be unable to request a license for the vGPU.
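+
+On systemd-based Linux guests, both the time synchronization and the licensing state can be checked quickly from a shell; a hedged sketch (the exact nvidia-smi output depends on the guest driver version):
+
+ timedatectl                           # 'System clock synchronized: yes' indicates working time sync
+ timedatectl set-ntp true              # enable NTP synchronization if it is off
+ nvidia-smi -q | grep -i -A 2 license  # shows the current vGPU license state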
+ +== Notes == + + +[[Category: HOWTO]] +[[Category: Qemu/KVM]] +``` \ No newline at end of file diff --git a/tech_docs/virtualization/proxmox_virtualmachines.md b/tech_docs/virtualization/proxmox_virtualmachines.md new file mode 100644 index 0000000..fd5e0ab --- /dev/null +++ b/tech_docs/virtualization/proxmox_virtualmachines.md @@ -0,0 +1,1095 @@ +```bash +ls /etc/pve/qemu-server +``` + +```bash +cat /etc/pve/qemu-server/500.conf +``` + +```bash +vi /etc/pve/qemu-server/500.conf +``` + +## VM 500 (Debian 12): +```bash +agent: 1 +balloon: 1024 +bios: ovmf +boot: order=scsi0;net0 +cores: 4 +cpu: host,hidden=1,flags=+pcid +kvm: 1 +machine: q35 +memory: 4096 +name: debian-12 +net0: virtio=BC:24:11:85:09:34,bridge=vmbr0,firewall=1,queues=4 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-500-disk-0,size=64G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3692 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` +## Test VDI + +```bash +agent: 1 +balloon: 1024 +bios: ovmf +boot: order=scsi0;net0 +cores: 4 +cpu: host,hidden=1,flags=+pcid +kvm: 1 +memory: 4096 +meta: creation-qemu=8.1.5,ctime=1714341537 +name: remote-0 +net0: virtio=BC:24:11:3C:B4:65,bridge=vmbr0,firewall=1 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-500-disk-0,discard=on,iothread=1,size=32G +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=599b304c-31b5-4ff8-a443-2c78c5b8fa25 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl +tpmstate0: local-lvm:vm-500-disk-0,size=4M,version=v2.0 +vmgenid: ad9c43fe-6abb-4085-8e43-2da12435abeb +``` + +## VM 600 (Ubuntu 22.04): +```bash +agent: 1 +balloon: 2048 +bios: ovmf +boot: order=scsi0;net0 +cores: 8 +cpu: host,hidden=1,flags=+pcid +kvm: 1 +machine: q35 +memory: 8192 +name: ubuntu-22-04 +net0: virtio=BC:24:11:85:09:35,bridge=vmbr0,firewall=1,queues=8 +net1: virtio=BC:24:11:3B:2E:95,bridge=vmbr1,firewall=1,queues=8 +numa: 1 +numa0: memory=4096,hostnodes=0,cpus=0-3 +numa1: memory=4096,hostnodes=1,cpus=4-7 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-600-disk-0,size=128G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3693 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` + +## VM 700 (Rocky Linux 9): +```bash +agent: 1 +balloon: 1536 +bios: ovmf +boot: order=scsi0;net0 +cores: 6 +cpu: host,hidden=1,flags=+pcid +kvm: 1 +machine: q35 +memory: 6144 +name: rocky-linux-9 +net0: virtio=BC:24:11:85:09:36,bridge=vmbr0,firewall=1,queues=6 +net1: virtio=BC:24:11:3B:2E:96,bridge=vmbr1,firewall=1,queues=6 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-700-disk-0,size=96G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3694 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819c +``` + +## VM 800 (Windows 10 VDI): +```bash +agent: 1 +balloon: 2048 +bios: ovmf +boot: order=scsi0;net0 +cores: 4 +cpu: host,hidden=1,flags=+pcid,hv-vendor-id=microsoft +kvm: 1 +machine: q35 +memory: 8192 +name: windows-10-vdi +net0: virtio=BC:24:11:85:09:37,bridge=vmbr0,firewall=1,queues=4 +numa: 0 +onboot: 1 +ostype: win10 +scsi0: zfs-disk0:vm-800-disk-0,size=128G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: 
uuid=7ccd0747-63bb-4626-b971-5f0ea27a3695 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819d +``` + +## VM 810 (Windows 7 VDI for General Office Use): +```bash +agent: 1 +balloon: 1024 +bios: ovmf +boot: order=scsi0;ide2;net0 +cores: 2 +cpu: host,hidden=1,flags=+pcid,hv-vendor-id=microsoft +ide2: zfs-disk0:iso/Win7_Pro_SP1_English_x64.iso,media=cdrom,size=4G +kvm: 1 +machine: q35 +memory: 4096 +name: windows-7-vdi +net0: virtio=BC:24:11:85:09:38,bridge=vmbr0,firewall=1,queues=2 +numa: 0 +onboot: 1 +ostype: win7 +scsi0: zfs-disk0:vm-810-disk-0,size=64G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3696 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819e +``` + +## VM 860 (Windows Server 2016 for Active Directory): +```bash +agent: 1 +balloon: 2048 +bios: ovmf +boot: order=scsi0;net0 +cores: 4 +cpu: host,hidden=1,flags=+pcid,hv-vendor-id=microsoft +kvm: 1 +machine: q35 +memory: 8192 +name: windows-server-2016-ad +net0: virtio=BC:24:11:85:09:39,bridge=vmbr0,firewall=1,queues=4 +net1: virtio=BC:24:11:3B:2E:97,bridge=vmbr1,firewall=1,queues=4 +numa: 0 +onboot: 1 +ostype: win2016 +scsi0: zfs-disk0:vm-860-disk-0,size=128G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3697 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819f +``` + +## VM 870 (Windows 10 Gaming with GPU Passthrough): +```bash +agent: 1 +args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off' +balloon: 1024 +bios: ovmf +boot: order=scsi0;net0 +cores: 8 +cpu: host,hidden=1,flags=+pcid,hv-vendor-id=nvidia +efidisk0: zfs-disk0:vm-870-disk-1,size=128K +hostpci0: 01:00,pcie=1,x-vga=1,romfile=vbios_patched.bin,rombar=0 +hostpci1: 01:00.1 +hotplug: network,usb +ivshmem: size=128,name=looking-glass +kvm: 1 +machine: pc-q35-7.2 +memory: 16384 +name: windows-10-gaming +net0: virtio=BC:24:11:85:09:3A,bridge=vmbr0,firewall=1,tag=700 +numa: 0 +onboot: 1 +ostype: win10 +scsi0: zfs-disk0:vm-870-disk-0,size=256G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3698 +sockets: 1 +tablet: 1 +usb0: host=04b4:0101 +usb1: host=258a:0001 +vga: none +vmgenid: 137fbfab-cc44-4246-a558-67369061819e +``` + +--- + +Certainly! Based on the documentation and context you provided, I have created optimized configurations for two new Windows-based VMs: VM 810 (Windows 7 VDI for general office use) and VM 860 (Windows Server 2016 for Active Directory purposes). These configurations incorporate the best practices and optimizations discussed earlier. 
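+
+If you prefer to scaffold these VMs from the CLI before fine-tuning, the following is a minimal sketch for VM 810 using `qm create`. It assumes the `zfs-disk0` storage and the Windows 7 ISO referenced in the configurations that follow below; the remaining tuning (CPU flags, VLAN tags, etc.) can be applied afterwards with `qm set` or by editing `/etc/pve/qemu-server/810.conf`.
+
+```bash
+# Hypothetical scaffold for VM 810; the full target configurations follow below
+qm create 810 --name windows-7-vdi --ostype win7 \
+  --cores 2 --sockets 1 --memory 4096 --balloon 1024 \
+  --agent 1 --onboot 1 --tablet 1 --serial0 socket \
+  --scsihw virtio-scsi-single \
+  --scsi0 zfs-disk0:64,ssd=1,discard=on,iothread=1 \
+  --ide2 zfs-disk0:iso/Win7_Pro_SP1_English_x64.iso,media=cdrom \
+  --net0 virtio,bridge=vmbr0,firewall=1 \
+  --boot 'order=scsi0;ide2;net0'
+```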
+ +VM 810 (Windows 7 VDI for General Office Use): +```bash +agent: 1 +balloon: 1024 +bios: seabios +boot: order=scsi0;ide2;net0 +cores: 2 +cpu: host,flags=+pcid,hidden=1,kvm=off +hotplug: disk,network,usb +ide2: zfs-disk0:iso/Win7_Pro_SP1_English_x64.iso,media=cdrom,size=4G +kvm: 1 +memory: 4096 +name: windows-7-vdi +net0: virtio=BC:24:11:85:09:38,bridge=vmbr0,firewall=1,tag=500 +numa: 0 +onboot: 1 +ostype: win7 +scsi0: zfs-disk0:vm-810-disk-0,size=64G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3696 +sockets: 1 +tablet: 1 +vga: std +vmgenid: 137fbfab-cc44-4246-a558-67369061819e +``` + +Key points for VM 810: +- Optimized CPU and memory configuration for a general office use VDI. +- Used SeaBIOS (`bios: seabios`) for better compatibility with Windows 7. +- Included an ISO file (`ide2`) for easy installation of Windows 7. +- Enabled the `tablet` device for improved input handling in the VDI environment. +- Used the `std` VGA adapter since Windows 7 doesn't require advanced graphics. + +VM 860 (Windows Server 2016 for Active Directory): +```bash +agent: 1 +balloon: 2048 +bios: ovmf +boot: order=scsi0;net0 +cores: 4 +cpu: host,flags=+pcid,hidden=1,kvm=off +hotplug: disk,network,usb +kvm: 1 +memory: 8192 +name: windows-server-2016-ad +net0: virtio=BC:24:11:85:09:39,bridge=vmbr0,firewall=1,tag=600 +net1: virtio=BC:24:11:3B:2E:97,bridge=vmbr1,firewall=1,tag=601 +numa: 0 +onboot: 1 +ostype: win2016 +scsi0: zfs-disk0:vm-860-disk-0,size=128G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3697 +sockets: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819f +``` + +Key points for VM 860: +- Allocated more CPU cores and memory to handle Active Directory workloads. +- Used OVMF (`bios: ovmf`) for better compatibility and features with Windows Server 2016. +- Configured two network interfaces (`net0` and `net1`) for separating management and AD traffic. +- Provided a larger disk size (`size=128G`) to accommodate AD database growth. +- Used QXL VGA (`vga: serial0,type=qxl`) for improved remote management graphics. + +These configurations follow the best practices and optimizations discussed in the provided documentation. They include CPU and memory optimizations, disk and network abstractions, guest agent integration, and appropriate virtual hardware settings for each use case. + +Remember to thoroughly test these configurations in your environment and make any necessary adjustments based on your specific requirements and performance observations. 
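+
+If testing shows that either VM needs more headroom, the key sizing decisions above can be adjusted later from the CLI; a sketch with example values (VMIDs taken from the configurations above):
+
+```bash
+# Grow the AD disk by 32 GiB if the directory database outgrows the initial 128 GiB;
+# the disk grows online, but the partition/filesystem inside the guest still has to be extended
+qm resize 860 scsi0 +32G
+
+# Give the Windows 7 VDI more memory if office workloads demand it (applies on the next VM start)
+qm set 810 --memory 6144 --balloon 2048
+
+# Verify the stored configuration afterwards
+qm config 860
+qm config 810
+```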
+ +VM 870 (Windows 10 Gaming with GPU Passthrough): +```bash +agent: 1 +args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off' +balloon: 1024 +bios: ovmf +boot: order=scsi0;net0 +cores: 8 +cpu: host,hidden=1,flags=+pcid,hv-vendor-id=nvidia +efidisk0: zfs-disk0:vm-870-disk-1,size=128K +hostpci0: 01:00,pcie=1,x-vga=1,romfile=vbios_patched.bin,rombar=0 +hostpci1: 01:00.1 +hotplug: network,usb +ivshmem: size=128,name=looking-glass +kvm: 1 +machine: pc-q35-7.2 +memory: 16384 +name: windows-10-gaming +net0: virtio=BC:24:11:85:09:3A,bridge=vmbr0,firewall=1,tag=700 +numa: 0 +onboot: 1 +ostype: win10 +scsi0: zfs-disk0:vm-870-disk-0,size=256G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3698 +sockets: 1 +tablet: 1 +usb0: host=04b4:0101 +usb1: host=258a:0001 +vga: none +vmgenid: 137fbfab-cc44-4246-a558-67369061819e +``` + +Key points for VM 870: +- Assigned 8 CPU cores and 16GB of memory for optimal gaming performance. +- Used the Q35 machine type (`machine: pc-q35-7.2`) for better compatibility with PCIe passthrough. +- Configured GPU passthrough with `hostpci0` and `hostpci1` for dedicated graphics performance. +- Included `romfile` and `rombar` options for GPU VBIOS compatibility. +- Added `args` with specific CPU flags and `hv-vendor-id` for better performance and compatibility. +- Set `kvm=off` in `args` to hide KVM virtualization from the guest. +- Allocated a larger disk (`size=256G`) for game installations and storage. +- Configured an `ivshmem` device for shared memory communication with Looking Glass. +- Passed through specific USB devices (`usb0` and `usb1`) for direct input handling. +- Set `vga: none` since the GPU is passed through directly. +- Created an EFI disk (`efidisk0`) for OVMF boot support. + +This configuration is tailored for a high-performance gaming experience with full GPU passthrough and optimized CPU and memory settings. The OS abstraction is achieved through the use of hidden KVM virtualization, specific CPU flags, and the Q35 machine type. + +The `ivshmem` device is configured to facilitate seamless remote gaming with Looking Glass, allowing you to access the gaming VM's display and input remotely. + +Remember to replace the `hostpci0` and `hostpci1` values with the actual PCI IDs of your GPU, and ensure that the GPU supports passthrough and has a compatible VBIOS. Also, adjust the USB device passthrough (`usb0` and `usb1`) based on your specific input devices. + +As always, thoroughly test the configuration and make any necessary adjustments based on your specific hardware and performance requirements. + +Absolutely! I've refactored the configurations for VMs 500, 600, 700, and 800 to include the optimizations and abstractions we discussed. 
Here are the updated configurations: + +VM 500 (Debian 12): +```bash +agent: 1 +balloon: 1024 +bios: seabios +boot: order=scsi0;net0 +cores: 4 +cpu: host,flags=+pcid,kvm=off +hotplug: disk,network,usb +kvm: 1 +memory: 4096 +name: debian-12 +net0: virtio=BC:24:11:85:09:34,bridge=vmbr0,firewall=1,tag=100 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-500-disk-0,size=64G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3692 +sockets: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` + +VM 600 (Ubuntu 22.04): +```bash +agent: 1 +balloon: 2048 +bios: ovmf +boot: order=scsi0;net0 +cores: 8 +cpu: host,flags=+pcid,hidden=1,kvm=off +hotplug: disk,network,usb +kvm: 1 +memory: 8192 +name: ubuntu-22-04 +net0: virtio=BC:24:11:85:09:35,bridge=vmbr0,firewall=1,tag=200 +net1: virtio=BC:24:11:3B:2E:95,bridge=vmbr1,firewall=1,tag=201 +numa: 1 +numa0: memory=4096,hostnodes=0,cpus=0-3 +numa1: memory=4096,hostnodes=1,cpus=4-7 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-600-disk-0,size=128G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3693 +sockets: 1 +vga: serial0,type=qxl,memory=128 +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` + +VM 700 (Rocky Linux 9): +```bash +agent: 1 +balloon: 1536 +bios: seabios +boot: order=scsi0;net0 +cores: 6 +cpu: host,flags=+pcid,hidden=1,kvm=off +hotplug: disk,network,usb +kvm: 1 +memory: 6144 +name: rocky-linux-9 +net0: virtio=BC:24:11:85:09:36,bridge=vmbr0,firewall=1,tag=300 +net1: virtio=BC:24:11:3B:2E:96,bridge=vmbr1,firewall=1,tag=301 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-700-disk-0,size=96G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3694 +sockets: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819c +``` + +VM 800 (Windows 10 VDI): +```bash +agent: 1 +balloon: 2048 +bios: ovmf +boot: order=scsi0;net0 +cores: 4 +cpu: host,flags=+pcid,hidden=1,kvm=off +hotplug: disk,network,usb +kvm: 1 +memory: 8192 +name: windows-10-vdi +net0: virtio=BC:24:11:85:09:37,bridge=vmbr0,firewall=1,tag=400 +numa: 0 +onboot: 1 +ostype: win10 +scsi0: zfs-disk0:vm-800-disk-0,size=128G,ssd=1,discard=on,iothread=1,cache=none,format=qcow2 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3695 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl,memory=256 +vmgenid: 137fbfab-cc44-4246-a558-67369061819d +``` + +The key changes and additions made to the configurations are: + +1. Added the `balloon` option to enable memory ballooning for dynamic memory management. +2. Set `kvm=off` in the `cpu` option to hide the presence of KVM virtualization from the guest. +3. Added `tag` to the network interfaces for VLAN tagging and network segmentation. +4. Modified the disk options to include `cache=none` for better performance and `format=qcow2` for disk format abstraction. +5. Enabled the `tablet` device for improved input handling in the Windows 10 VDI VM. + +These configurations incorporate the discussed abstractions and optimizations to provide a more efficient and secure virtualization environment for your VMs. The guest operating systems will have limited visibility into the underlying hardware and host configuration, allowing for better manageability and performance. 
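+
+The same changes can also be applied with `qm set` instead of editing the files by hand; a short sketch using VM 600's first NIC and VM 500's balloon value from the configurations above:
+
+```bash
+# Re-declare VM 600's first NIC with the VLAN tag added (MAC and bridge kept from the config above)
+qm set 600 --net0 virtio=BC:24:11:85:09:35,bridge=vmbr0,firewall=1,tag=200
+
+# Set a 1 GiB minimum memory target for ballooning on VM 500
+qm set 500 --balloon 1024
+
+# Confirm what was written to the configurations
+qm config 600 | grep '^net0:'
+qm config 500 | grep '^balloon:'
+```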
+ +Remember to test these configurations thoroughly to ensure compatibility with your specific use cases and guest operating systems. + +--- + +Certainly! Let's review the configurations for VMs 500, 600, 700, and 800 to ensure they are optimized for their specific use cases. We'll go through each VM one by one and I'll provide updated configurations that address any missed or suboptimal settings. + +VM 500 (Debian 12): +```bash +agent: 1 +bios: seabios +boot: order=scsi0;net0 +cores: 4 +cpu: host,flags=+pcid +hotplug: disk,network,usb +kvm: 1 +memory: 4096 +name: debian-12 +net0: virtio=BC:24:11:85:09:34,bridge=vmbr0,firewall=1 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-500-disk-0,size=64G,ssd=1,discard=on,iothread=1 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3692 +sockets: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` + +Optimizations: +- Added `bios: seabios` for compatibility with Debian. +- Adjusted boot order to prioritize disk and network. +- Enabled hotplug for disk, network, and USB devices. +- Set `scsihw: virtio-scsi-single` for better single-disk performance. +- Added `iothread=1` to the disk configuration for improved I/O performance. +- Changed `vga` to use `serial0` for console access and set `type=qxl` for potential graphical use. + +VM 600 (Ubuntu 22.04): +```bash +agent: 1 +bios: ovmf +boot: order=scsi0;net0 +cores: 8 +cpu: host,flags=+pcid +hotplug: disk,network,usb +kvm: 1 +memory: 8192 +name: ubuntu-22-04 +net0: virtio=BC:24:11:85:09:35,bridge=vmbr0,firewall=1 +net1: virtio=BC:24:11:3B:2E:95,bridge=vmbr1,firewall=1 +numa: 1 +numa0: memory=4096,hostnodes=0,cpus=0-3 +numa1: memory=4096,hostnodes=1,cpus=4-7 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-600-disk-0,size=128G,ssd=1,discard=on,iothread=1 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3693 +sockets: 1 +vga: serial0,type=qxl,memory=128 +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` + +Optimizations: +- Set `bios: ovmf` for better compatibility with Ubuntu 22.04. +- Enabled firewall on both network interfaces. +- Adjusted `vga` settings to allocate 128MB of video memory for improved graphical performance. +- Other optimizations similar to VM 500. + +VM 700 (Rocky Linux 9): +```bash +agent: 1 +bios: seabios +boot: order=scsi0;net0 +cores: 6 +cpu: host,flags=+pcid +hotplug: disk,network,usb +kvm: 1 +memory: 6144 +name: rocky-linux-9 +net0: virtio=BC:24:11:85:09:36,bridge=vmbr0,firewall=1 +net1: virtio=BC:24:11:3B:2E:96,bridge=vmbr1,firewall=1 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-700-disk-0,size=96G,ssd=1,discard=on,iothread=1 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3694 +sockets: 1 +vga: serial0,type=qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819c +``` + +Optimizations: +- Similar to VM 500, with adjusted resources and added firewall on the second network interface. 
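+
+To confirm that storage options such as `iothread=1` and the `virtio-scsi-single` controller actually reach the generated QEMU command line for these Linux VMs, `qm showcmd` can be used; a quick check, using VM 700 as the example:
+
+```bash
+# Print the QEMU command line Proxmox would generate for VM 700 and filter the storage-related parts
+qm showcmd 700 --pretty | grep -E 'iothread|virtio-scsi'
+```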
+ +VM 800 (Windows 10 VDI): +```bash +agent: 1 +bios: ovmf +boot: order=scsi0;net0 +cores: 4 +cpu: host,flags=+pcid,hidden=1 +hotplug: disk,network,usb +kvm: 1 +memory: 8192 +name: windows-10-vdi +net0: virtio=BC:24:11:85:09:37,bridge=vmbr0,firewall=1 +numa: 0 +onboot: 1 +ostype: win10 +scsi0: zfs-disk0:vm-800-disk-0,size=128G,ssd=1,discard=on,iothread=1 +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3695 +sockets: 1 +tablet: 1 +vga: serial0,type=qxl,memory=256 +vmgenid: 137fbfab-cc44-4246-a558-67369061819d +``` + +Optimizations: +- Set `cpu` flag `hidden=1` to hide virtualization from the guest for better compatibility. +- Enabled the `tablet` device for improved input handling in VDI scenarios. +- Allocated 256MB of video memory for better graphical performance. +- Other optimizations similar to previous VMs. + +These configurations include several optimizations based on the Proxmox best practices and the specific use cases for each VM. They ensure that the VMs are set up to leverage the available resources efficiently, have appropriate device and driver configurations, and are tailored to their respective operating systems and workloads. + +Feel free to review these configurations and let me know if you have any further questions or if there are any additional optimizations you would like to discuss. + +--- + +It appears that the error message "unable to find a serial interface" suggests that the serial device configuration might be missing or incorrect in the VM configuration. Let's ensure that the serial interface is correctly configured for VMs 500, 600, and 700. Below, I will provide fully optimized configurations for each VM, including the serial device setup and ensuring all best practices are followed. 
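+
+As a quick check before applying the full configurations below, the missing serial device can also be added and tested directly from the CLI; a sketch for VM 500 (repeat for 600 and 700, and note that the guest itself must provide a console on ttyS0, e.g. via `console=ttyS0` on its kernel command line, for an interactive login):
+
+```bash
+# Add a serial socket device so the VM exposes a serial interface
+qm set 500 --serial0 socket
+
+# The new device is only picked up on a fresh start of the VM
+qm shutdown 500 && qm start 500
+
+# Attach to the serial console (exit with Ctrl+O)
+qm terminal 500
+```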
+ +### VM 500 Configuration (Debian 12) +```bash +agent: 1 +balloon: 0 +boot: order=scsi0;net0 +bootdisk: scsi0 +cores: 4 +cpu: host,flags=+pcid +ide2: none,media=cdrom +kvm: 1 +memory: 4096 +name: debian-12 +net0: virtio=BC:24:11:85:09:34,bridge=vmbr0,firewall=1 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-500-disk-0,size=64G,ssd=1,discard=on +scsihw: virtio-scsi-pci +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3692 +sockets: 1 +startup: order=1,up=30 +vga: std +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` + +### VM 600 Configuration (Ubuntu 22.04) +```bash +agent: 1 +balloon: 0 +boot: order=scsi0;net0 +bootdisk: scsi0 +cores: 8 +cpu: host,flags=+pcid +kvm: 1 +memory: 8192 +name: ubuntu-22-04 +net0: virtio=BC:24:11:85:09:35,bridge=vmbr0,firewall=1 +net1: virtio=BC:24:11:3B:2E:95,bridge=vmbr1 +numa: 1 +numa0: memory=4096,hostnodes=0,cpus=0-3 +numa1: memory=4096,hostnodes=1,cpus=4-7 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-600-disk-0,size=128G,ssd=1,discard=on +scsihw: virtio-scsi-pci +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3693 +sockets: 1 +startup: order=2,up=30 +vga: qxl,memory=128 +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` + +### VM 700 Configuration (Rocky Linux 9) +```bash +agent: 1 +balloon: 0 +boot: order=scsi0;net0 +bootdisk: scsi0 +cores: 6 +cpu: host,flags=+pcid +kvm: 1 +memory: 6144 +name: rocky-linux-9 +net0: virtio=BC:24:11:85:09:36,bridge=vmbr0,firewall=1 +net1: virtio=BC:24:11:3B:2E:96,bridge=vmbr1 +numa: 0 +onboot: 1 +ostype: l26 +scsi0: zfs-disk0:vm-700-disk-0,size=96G,ssd=1,discard=on +scsihw: virtio-scsi-pci +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3694 +sockets: 1 +startup: order=3,up=30 +vga: qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819c +``` + +### Key Configurations: +- **Agent** is enabled to improve integration and management. +- **CPU** settings include using the host's CPU model (`cpu: host`) and enabling specific flags (`flags=+pcid`) to enhance performance and security. +- **Memory** is set according to each VM's workload requirements. +- **SCSI disk** settings are optimized with SSD emulation and discard options enabled for performance. +- **Serial interface** (`serial0: socket`) is configured for console access, which should + + resolve the issue of accessing the VM terminal. +- **VMGenID** provides a unique identifier for systems sensitive to underlying hardware changes, especially useful in Windows environments. +- **NUMA** settings for VM 600 optimize memory and CPU allocation across multiple nodes. +- **VGA** settings are adjusted to suit the graphical needs of the VMs, with QXL used where graphical performance is important. + +These configurations ensure that your VMs are optimized for performance, manageability, and security, adhering to best practices within the Proxmox VE environment. + +--- + +Here's the configuration for VM 800, which is optimized for a Windows 10 environment intended for Virtual Desktop Infrastructure (VDI) purposes. This setup adheres to the Proxmox VE best practices and includes specific configurations to enhance the performance and usability of a Windows-based system. 
+ +### VM 800 Configuration (Windows 10 for VDI) +```bash +agent: 1 +balloon: 0 +bios: ovmf +boot: order=scsi0;net0 +bootdisk: scsi0 +cores: 4 +cpu: host,flags=+pcid +kvm: 1 +memory: 8192 +name: windows-10-vdi +net0: virtio=BC:24:11:85:09:37,bridge=vmbr0,firewall=1 +numa: 0 +onboot: 1 +ostype: win10 +scsi0: zfs-disk0:vm-800-disk-0,size=128G,ssd=1,discard=on +scsihw: virtio-scsi-single +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3695 +sockets: 1 +startup: order=4,up=30 +tablet: 1 +vga: qxl,memory=256 +vmgenid: 137fbfab-cc44-4246-a558-67369061819d +``` + +### Key Configurations for Windows 10 VDI: +- **Agent** (`agent: 1`): QEMU guest agent is enabled for enhanced integration and management. +- **BIOS** (`bios: ovmf`): Uses OVMF (UEFI BIOS) which is recommended for modern Windows installations, enabling features like secure boot. +- **CPU and Cores** (`cores: 4`, `cpu: host`): Uses 4 CPU cores and the host CPU model for optimal performance, necessary for a responsive desktop experience. +- **Memory** (`memory: 8192`): 8GB of RAM to ensure smooth operation of Windows 10 and any desktop applications. +- **Network** (`net0: virtio`): Uses VirtIO network drivers for the best network performance. +- **SCSI Controller** (`scsihw: virtio-scsi-single`): Optimizes disk operations with the high-performance VirtIO SCSI controller. +- **Disk** (`scsi0: zfs-disk0:vm-800-disk-0`): Configures a 128GB disk with SSD emulation and TRIM/DISCARD support to enhance disk I/O performance. +- **Serial Console** (`serial0: socket`): Enables serial console access for remote management. +- **Startup** (`startup: order=4,up=30`): Sets startup priority and delay to manage system boot sequence. +- **Tablet Device** (`tablet: 1`): Adds a tablet input device, improving the mouse pointer integration and experience when accessing through remote desktop protocols. +- **Video Adapter** (`vga: qxl,memory=256`): Allocates more video memory (256MB) and uses the QXL video driver for better performance in graphical applications. +- **VM Generation ID** (`vmgenid`): Provides a unique generation ID to manage changes in the virtual hardware, especially important for VMs that may undergo snapshots or clones. + +This configuration ensures that the Windows 10 VM is well-optimized for VDI applications, providing a robust and efficient virtual desktop environment within your Proxmox VE infrastructure. + +--- + +Creating an effective guide for managing VM configurations in Proxmox VE involves summarizing the best practices and key settings we've discussed. Here’s a comprehensive guide to help you configure and optimize virtual machines effectively: + +### 1. **General Best Practices** + - **Backup Configuration**: Always backup VM configurations and data before making significant changes. + - **Consistent Naming Conventions**: Use clear and consistent naming for VMs and storage to ease management. + - **Resource Monitoring**: Regularly monitor VM performance and resource utilization to adjust allocations and ensure optimal operations. + +### 2. **VM Configuration Best Practices** + - **Agent**: Always enable the QEMU guest agent (`agent: 1`) to improve management capabilities and performance monitoring. + - **CPU Settings**: + - `cores`: Assign cores based on the application need but beware of overallocation which can affect host performance. + - `cpu`: Use `host` to match the VM’s CPU type with the host for best performance. + - `sockets`: Define based on the physical CPU configuration of the host. 
+ - **Memory**: Allocate memory based on the application requirements but ensure it does not exceed the host's available resources. + - **KVM**: Always enable (`kvm: 1`) to utilize hardware-assisted virtualization. + - **NUMA**: Use Non-Uniform Memory Access settings if the VM requires high-performance memory access and the host hardware supports it. + +### 3. **Disk Configuration** + - **SCSI Settings**: + - `scsihw`: Use `virtio-scsi-single` for single drive VMs for better performance. + - `discard`: Enable (`discard=on`) to allow TRIM/DISCARD commands for SSDs, which helps with space management and performance. + - `ssd`: Set (`ssd=1`) if using SSD storage to emulate accurate SSD behaviors. + - **Size**: Allocate disk size based on expected data growth to avoid frequent resizing which can impact performance. + +### 4. **Network Configuration** + - **Model**: Use `virtio` for network adapters for the best performance. + - **Bridges**: Connect VMs to appropriate bridges (`vmbr0`, `vmbr1`, etc.) configured on the host to organize network traffic efficiently. + +### 5. **Graphics and Video** + - **Video Type**: + - `vga`: Use `std` for standard applications; `qxl` for desktop environments or graphical applications to improve video performance. + +### 6. **Startup and Order** + - **Boot Order**: Configure boot devices and order to streamline VM startups (`boot: order=scsi0;ide2;net0`). + - **Onboot**: Set (`onboot: 1`) to automatically start VMs when the Proxmox host boots up. + +### 7. **Serial and Console Access** + - **Serial Console**: Use `serial0: socket` to access VM consoles through Proxmox’s integrated console features. + +### 8. **Additional Options** + - **UUID**: Each VM should have a unique `smbios1: uuid` to distinguish it in networks. + - **VM Generation ID** (`vmgenid`): Useful for systems like Windows that are sensitive to underlying virtual hardware changes. + +### 9. **Command-Line Management** + - **Create, Clone, and Manage VMs**: Familiarize with `qm` command-line tools (`qm create`, `qm set`, `qm clone`, etc.) for detailed management. + +### 10. **Documentation and Updates** + - **Keep Documentation Updated**: Regularly update your documentation as you adjust or expand your virtual environment. + - **Stay Informed on Updates**: Keep the Proxmox VE and guest OSes updated to ensure security and performance enhancements are applied. + +Implementing these practices will help you maintain a robust and efficient virtual environment in Proxmox VE, maximizing performance and stability across your VM deployments. 
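+
+As a small companion to this guide, the guest agent and autostart checks can be scripted against the standard configuration directory; a hedged sketch (adjust the checks to your own policy):
+
+```bash
+#!/bin/bash
+# Rough audit of all VM configs on this node against two of the practices above
+for conf in /etc/pve/qemu-server/*.conf; do
+    vmid=$(basename "$conf" .conf)
+    grep -q '^agent:'    "$conf" || echo "VM $vmid: no QEMU guest agent configured"
+    grep -q '^onboot: 1' "$conf" || echo "VM $vmid: autostart (onboot) not enabled"
+done
+```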
+ +--- + +Here are the updated configurations for VMs 500, 600, and 700, optimized for performance and based on best practices: + +VM 500 (Debian 12): +```bash +agent: enabled=1 +boot: c +bootdisk: scsi0 +cores: 4 +ide2: local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom,size=629M +cpu: host +kvm: 1 +memory: 4096 +name: debian-12 +net0: virtio=BC:24:11:85:09:34,bridge=vmbr0 +net1: virtio=BC:24:11:3B:2E:94,bridge=vmbr1 +onboot: 1 +scsi0: zfs-disk0:vm-500-disk-0,discard=on,size=64G,ssd=1 +scsihw: virtio-scsi-pci +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3692 +startup: order=1 +vga: qxl +``` + +root@whitebox:/etc/pve/qemu-server# cat 501.conf +```bash +agent: 1 +boot: order=scsi0;ide2;net0 +cores: 2 +cpu: x86-64-v2-AES +ide2: local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom,size=629M +memory: 2048 +meta: creation-qemu=8.1.5,ctime=1713509174 +name: debian12-1 +net0: virtio=BC:24:11:0A:EC:C6,bridge=vmbr0,firewall=1 +numa: 0 +ostype: l26 +scsi0: zfs-disk2:vm-501-disk-0,discard=on,iothread=1,size=64G +scsihw: virtio-scsi-single +smbios1: uuid=97afaa05-5ef4-48a5-afcd-54867b40eca0 +sockets: 1 +``` + +VM 600 (Ubuntu 22.04): +```bash +agent: enabled=1 +boot: c +bootdisk: scsi0 +cores: 8 +cpu: host +kvm: 1 +memory: 8192 +name: ubuntu-22-04 +net0: virtio=BC:24:11:85:09:35,bridge=vmbr0 +net1: virtio=BC:24:11:3B:2E:95,bridge=vmbr1 +numa: 1 +numa0: memory=4096,hostnodes=0,cpus=0-3 +numa1: memory=4096,hostnodes=1,cpus=4-7 +onboot: 1 +scsi0: zfs-disk0:vm-600-disk-0,discard=on,size=128G,ssd=1 +scsihw: virtio-scsi-pci +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3693 +startup: order=2 +vga: qxl,memory=128 +vmgenid: 137fbfab-cc44-4246-a558-67369061819b +``` + +VM 700 (Rocky Linux 9): +```bash +agent: enabled=1 +boot: c +bootdisk: scsi0 +cores: 6 +cpu: host +kvm: 1 +memory: 6144 +name: rocky-linux-9 +net0: virtio=BC:24:11:85:09:36,bridge=vmbr0 +net1: virtio=BC:24:11:3B:2E:96,bridge=vmbr1 +onboot: 1 +scsi0: zfs-disk0:vm-700-disk-0,discard=on,size=96G,ssd=1 +scsihw: virtio-scsi-pci +serial0: socket +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3694 +startup: order=3 +vga: qxl +vmgenid: 137fbfab-cc44-4246-a558-67369061819c +``` + +```bash +qm clone 500 501 --full --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 +``` + +Key changes and optimizations: + +1. Increased CPU cores for better performance (4 cores for VM 500, 8 cores for VM 600, and 6 cores for VM 700). +2. Set `cpu: host` to match the host's CPU model for optimal performance. +3. Explicitly enabled KVM hardware virtualization with `kvm: 1`. +4. Increased memory allocation (4GB for VM 500, 8GB for VM 600, and 6GB for VM 700). +5. Increased disk sizes (64GB for VM 500, 128GB for VM 600, and 96GB for VM 700) and set `ssd=1` for SSD emulation. +6. Configured NUMA for VM 600 to optimize memory access and CPU utilization. +7. Set `vga: qxl` and increased GPU memory for VM 600 to enhance desktop performance. +8. Adjusted startup order to prioritize VMs based on their importance. + +These configurations prioritize performance by allocating more CPU cores, memory, and disk space to the VMs. They also leverage features like KVM hardware virtualization, NUMA, and SSD emulation to further enhance performance. + +Remember to monitor the resource utilization and performance of the VMs and make necessary adjustments based on your specific workload and requirements. + +--- + +Here's a guide to help you manage your VMs using the qm command-line tool in Proxmox VE: + +1. 
Creating a VM: + Use the `qm create` command to create a new VM. Specify the VM ID, OS type, disk size, and other parameters. + Example: + ``` + qm create 100 --memory 1024 --net0 virtio,bridge=vmbr0 --bootdisk sata0 --ostype l26 --sockets 1 --cores 2 --sata0 local-lvm:32,format=qcow2 + ``` + +2. Configuring VM Hardware: + Use the `qm set` command to modify VM hardware settings such as CPU, memory, network, and disk. + Examples: + - Set CPU cores: `qm set 100 --cores 4` + - Set memory: `qm set 100 --memory 2048` + - Add a network device: `qm set 100 --net1 virtio,bridge=vmbr1` + - Add a disk: `qm set 100 --sata1 local-lvm:64,format=qcow2` + +3. Starting and Stopping VMs: + - Start a VM: `qm start 100` + - Stop a VM: `qm stop 100` + - Shutdown a VM gracefully: `qm shutdown 100` + - Reset a VM: `qm reset 100` + +4. Cloning and Templating: + - Clone a VM: `qm clone 100 101` + - Create a template from a VM: `qm template 100` + - Create a new VM from a template: `qm clone 100 102 --name "New VM"` + +5. Managing Snapshots: + - Create a snapshot: `qm snapshot 100 snap1` + - List snapshots: `qm listsnapshot 100` + - Restore a snapshot: `qm rollback 100 snap1` + - Delete a snapshot: `qm delsnapshot 100 snap1` + +6. Migrating VMs: + - Offline migration: `qm migrate 100 pve2 --online 0` + - Online migration: `qm migrate 100 pve2 --online 1` + +7. Backup and Restore: + - Backup a VM: `vzdump 100 --storage local --compress lzo` + - Restore a VM: `qmrestore 100 vzdump-qemu-100-2023_05_20-14_30_00.vma.lzo --storage local-lvm` + +8. Monitoring VM Performance: + - Show VM status: `qm status 100` + - Show VM configuration: `qm config 100` + - Show VM usage statistics: `qm monitor 100` + +9. Accessing VM Console: + - Access VM console: `qm terminal 100` + +10. Managing VM Disks: + - Move a VM disk to a different storage: `qm move-disk 100 scsi0 local-lvm` + - Resize a VM disk: `qm resize 100 scsi0 +10G` + +Remember to replace `100` with the actual VM ID you want to manage. + +These are just a few examples of what you can do with the `qm` command. For a complete list of available options and commands, refer to the qm manual page by running `man qm`. + +Always make sure to have proper backups of your VMs before performing any major operations. Regularly monitor your VMs' resource usage and performance to ensure optimal operation. + +Feel free to explore and experiment with different commands and options to effectively manage your VMs in Proxmox VE. + +--- + +### Legend for VM Configuration Parameters: +Here's a breakdown of each configuration parameter used in the VM settings, explaining their purpose and impact: + +- **agent**: Enables the QEMU guest agent which enhances the integration and management of the guest OS with the host. +- **boot**: Defines the boot order of the devices. +- **bootdisk**: Specifies the primary boot disk. +- **cores**: Number of CPU cores allocated to the VM. More cores can improve multitasking and application performance. +- **cpu**: Sets the CPU type; using `host` leverages the host CPU’s features for the best compatibility and performance. +- **ide2**: Specifies a secondary IDE device, typically used for CD-ROM drives or ISO images. +- **kvm**: Enables or disables the Kernel-based Virtual Machine (KVM), which provides hardware-assisted virtualization. +- **memory**: Amount of RAM allocated to the VM. +- **name**: Name of the VM, used for identification within Proxmox VE. +- **net0, net1**: Network interfaces for the VM. Using `virtio` drivers offers better network performance. 
+- **onboot**: Determines if the VM should automatically start when the host system boots. +- **scsi0**: Defines settings for the SCSI disk such as size, whether to use SSD emulation, and whether to allow TRIM/DISCARD operations. +- **scsihw**: Specifies the SCSI hardware type; `virtio-scsi-pci` is a high-performance virtual SCSI device. +- **serial0**: Configures serial devices, typically used for console access. +- **smbios1**: Sets the System Management BIOS (SMBIOS) information including the universally unique identifier (UUID). +- **startup**: Defines the startup behavior and order relative to other VMs. +- **vga**: Configures the video graphics adapter. Options like `qxl` are optimized for VMs that require better graphical performance. + +### VM 500 Configuration Details: +This virtual machine is configured to serve as a Debian 12 system with a focus on stable and efficient operation. It is designed to handle moderate workloads such as development environments, lightweight applications, and general server tasks. + +```bash +agent: enabled=1 # QEMU guest agent is enabled for improved integration and management. +boot: c # Boot priority is set to the primary SCSI disk. +bootdisk: scsi0 # Primary boot device is the first SCSI disk. +cores: 4 # The VM is allocated 4 CPU cores. +cpu: host # CPU type is matched to the host for optimal performance. +ide2: local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom,size=629M # ISO image for Debian installation. +kvm: 1 # KVM hardware virtualization is enabled. +memory: 4096 # 4GB of RAM is allocated to the VM. +name: debian-12 # Name of the VM for easy identification. +net0: virtio=BC:24:11:85:09:34,bridge=vmbr0 # First network interface using virtio driver on vmbr0. +net1: virtio=BC:24:11:3B:2E:94,bridge=vmbr1 # Second network interface using virtio driver on vmbr1. +onboot: 1 # VM is set to automatically start at boot. +scsi0: zfs-disk0:vm-500-disk-0,discard=on,size=64G,ssd=1 # Primary SCSI disk with 64GB, SSD emulation, and TRIM enabled. +scsihw: virtio-scsi-pci # High-performance SCSI controller. +serial0: socket # Serial console access through a socket. +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3692 # Unique identifier for the VM. +startup: order=1 # Startup order set to 1, indicating high priority. +vga: qxl # QXL video adapter for improved graphics performance. +``` + +### Explanation of VM 500's Optimized Settings: +- **CPU and Memory**: The allocation of 4 CPU cores and 4GB of RAM balances performance with resource efficiency, suitable for the expected workload. +- **Storage Configuration**: The use of a 64GB SSD-emulated disk with TRIM support enhances I/O performance, which is crucial for responsive system behavior. +- **Network Setup**: Dual networking interfaces ensure redundancy and potential segmentation (e.g., management vs. operational traffic). +- **Graphics**: The `qxl` video adapter is chosen to provide sufficient graphical capabilities, especially useful if the VM is accessed via a graphical console frequently. + +### VM 600 Configuration Details: +VM 600 is configured as an Ubuntu 22.04 system, optimized for higher workload capabilities such as development environments, applications requiring more computing power, and server tasks that benefit from increased RAM and CPU allocation. + +```bash +agent: enabled=1 # QEMU guest agent is enabled for improved integration and management. +boot: c # Boot priority is set to the primary SCSI disk. +bootdisk: scsi0 # Primary boot device is the first SCSI disk. 
+cores: 8 # The VM is allocated 8 CPU cores. +cpu: host # CPU type is matched to the host for optimal performance. +kvm: 1 # KVM hardware virtualization is enabled. +memory: 8192 # 8GB of RAM is allocated to the VM. +name: ubuntu-22-04 # Name of the VM for easy identification. +net0: virtio=BC:24:11:85:09:35,bridge=vmbr0 # First network interface using virtio driver on vmbr0. +net1: virtio=BC:24:11:3B:2E:95,bridge=vmbr1 # Second network interface using virtio driver on vmbr1. +numa: 1 # NUMA is enabled with specific configurations for optimized memory and CPU usage. +numa0: memory=4096,hostnodes=0,cpus=0-3 # First NUMA node configuration. +numa1: memory=4096,hostnodes=1,cpus=4-7 # Second NUMA node configuration. +onboot: 1 # VM is set to automatically start at boot. +scsi0: zfs-disk0:vm-600-disk-0,discard=on,size=128G,ssd=1 # Primary SCSI disk with 128GB, SSD emulation, and TRIM enabled. +scsihw: virtio-scsi-pci # High-performance SCSI controller. +serial0: socket # Serial console access through a socket. +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3693 # Unique identifier for the VM. +startup: order=2 # Startup order set to 2, indicating priority after VM 500. +vga: qxl,memory=128 # QXL video adapter with increased memory for improved graphics performance. +``` + +### VM 700 Configuration Details: +VM 700 is configured as a Rocky Linux 9 system, aimed at robust server tasks, with balanced CPU and memory resources to support a variety of server-based applications, including databases and application servers. + +```bash +agent: enabled=1 # QEMU guest agent is enabled for improved integration and management. +boot: c # Boot priority is set to the primary SCSI disk. +bootdisk: scsi0 # Primary boot device is the first SCSI disk. +cores: 6 # The VM is allocated 6 CPU cores. +cpu: host # CPU type is matched to the host for optimal performance. +kvm: 1 # KVM hardware virtualization is enabled. +memory: 6144 # 6GB of RAM is allocated to the VM. +name: rocky-linux-9 # Name of the VM for easy identification. +net0: virtio=BC:24:11:85:09:36,bridge=vmbr0 # First network interface using virtio driver on vmbr0. +net1: virtio=BC:24:11:3B:2E:96,bridge=vmbr1 # Second network interface using virtio driver on vmbr1. +onboot: 1 # VM is set to automatically start at boot. +scsi0: zfs-disk0:vm-700-disk-0,discard=on,size=96G,ssd=1 # Primary SCSI disk with 96GB, SSD emulation, and TRIM enabled. +scsihw: virtio-scsi-pci # High-performance SCSI controller. +serial0: socket # Serial console access through a socket. +smbios1: uuid=7ccd0747-63bb-4626-b971-5f0ea27a3694 # Unique identifier for the VM. +startup: order=3 # Startup order set to 3, indicating priority after VM 600. +vga: qxl # QXL video adapter for improved graphics performance. +``` + +### Explanation and Optimization: +- **CPU and Memory**: Both VMs are allocated higher resources compared to VM 500 to handle more intensive tasks. VM 600 has 8 cores and 8GB of RAM, while VM 700 has 6 cores and 6GB, reflecting their expected usage profiles. +- **NUMA Configuration for VM 600**: Specific NUMA settings optimize the performance by aligning CPU cores and memory to specific NUMA nodes, reducing latency and increasing efficiency in handling processes. +- **Storage Configurations**: Both VMs use ZFS-backed storage with SSD emulation and TRIM support, optimizing disk I/O operations, crucial for performance-sensitive applications. 
+- **Network and Graphics**: Both VMs use the `virtio` network model for better performance and `qxl` for video to support graphical applications effectively if needed. \ No newline at end of file diff --git a/tech_docs/webdev/eleventy_structure.md b/tech_docs/webdev/eleventy_structure.md new file mode 100644 index 0000000..9a3ae03 --- /dev/null +++ b/tech_docs/webdev/eleventy_structure.md @@ -0,0 +1,28 @@ +```markdown +my-eleventy-project/ +│ +├── _includes/ +│ ├── layouts/ +│ │ └── base.njk +│ └── partials/ +│ ├── header.njk +│ └── footer.njk +│ +├── media/ +│ ├── images/ +│ └── videos/ +│ +├── css/ +│ └── style.css +│ +├── js/ +│ └── script.js +│ +├── pages/ (or just place *.md files here) +│ ├── about.md +│ ├── projects.md +│ └── contact.md +│ +├── .eleventy.js +└── package.json +``` \ No newline at end of file diff --git a/tech_docs/webdev/webdev_training.md b/tech_docs/webdev/webdev_training.md new file mode 100644 index 0000000..2a3ebc3 --- /dev/null +++ b/tech_docs/webdev/webdev_training.md @@ -0,0 +1,168 @@ +## Feedback on Plans for Learning + +Your plans for learning are detailed and purposefully directed towards achieving your goal of becoming a web developer. Here are some refined suggestions for enhancing your strategy: + +- **Daily Practice:** Dedicate a minimum of 30 minutes daily to learn and practice. Consistent effort accumulates over time. +- **Community Engagement:** Seek help and share your progress in online communities such as [Stack Overflow](https://stackoverflow.com/) and [Reddit's r/webdev](https://www.reddit.com/r/webdev/) to foster learning through collaboration. +- **Patience:** Web development is intricate and may sometimes be frustrating. Remember, every expert was once a beginner. Persist with your efforts. + +## SMART Goals for Training Plan + +Refine your SMART goals as follows: + +- **Specific:** Craft a basic webpage using HTML and CSS, incorporating at least one CSS framework by the end of the sixth week. +- **Measurable:** Complete all assignments in your ongoing [mention the specific course name] web development course. +- **Achievable:** Allocate a fixed 30 minutes daily for web development learning. +- **Relevant:** Stay focused on acquiring skills pivotal to web development, keeping abreast with the latest industry trends. +- **Time-bound:** Finish your web development course within a span of six months, setting a steady pace for learning. + +## Weekly Training Schedule (Week 1) + +Below is a more structured weekly schedule with practical tasks: + +### Day 1 + +- **Objective:** Grasp the role and fundamental features of HTML in web development. +- **Resources:** + - [HTML Tutorial](https://www.w3schools.com/html/) + - [HTML Crash Course](https://www.theodinproject.com/lessons/foundations-introduction-to-html-and-css) + +### Day 2 + +- **Objective:** Learn about the structural elements of an HTML document, focusing on , , , and . +- **Resources:** + - [HTML Document Structure](https://www.w3.org/TR/html401/struct/global.html) + - [HTML Document Structure Tutorial](https://www.reddit.com/r/webdev/comments/q9f82u/i_made_a_detailed_walkthrough_of_the_odin/) + +### Day 3 + +- **Objective:** Acquaint yourself with common HTML tags used to format a webpage. +- **Resources:** + - [HTML Tags](https://www.w3schools.com/tags/tag_html.asp) + - [HTML Tags Tutorial](https://www.theodinproject.com/lessons/foundations-elements-and-tags) + +### Days 4-5 + +- **Practice:** Create a simplistic HTML webpage utilizing the tags learned. 
Share it with friends or online communities for feedback. +- **Resources:** + - [HTML Tutorial](https://www.w3schools.com/html/) + - [HTML Crash Course](https://www.theodinproject.com/lessons/foundations-introduction-to-html-and-css) + +## Tips for a Fruitful Training Plan + +Here are some actionable tips to augment your learning journey: + +- **Prioritize Tasks:** Utilize the Eisenhower Matrix or ABCDE method to focus on high-priority tasks, optimizing your learning path. +- **Breaks:** Regular short breaks can prevent burnout and enhance focus. Ensure to take breaks during your learning sessions. +- **Mentorship:** Seek a mentor through platforms such as [LinkedIn](https://www.linkedin.com/) or local web development communities. A mentor can provide constructive feedback and guidance. + +Remember to track your progress regularly to identify strengths and areas needing improvement. Wishing you the best in your web development learning journey! + +and the following training plan: + +## 24-Week Training Plan + +### Step 1: Divide your training into smaller, manageable sections. + +Break down your 24-week training plan into smaller, more manageable sections. For example, you could divide it into three 8-week phases, or four 6-week phases. + +### Step 2: Assign specific weekly topics and objectives. + +Once you have divided your training plan into sections, assign specific weekly topics and objectives. For example, in Week 1, you might focus on learning the basics of HTML and CSS. In Week 2, you might focus on building a simple webpage. + +### Step 3: Create a Google Calendar for your training plan. + +Create a new Google Calendar specifically for your 24-Week Training Plan. Set up all-day events for each week, and include reminders at the beginning of the week to help you stay focused. + +### Step 4: Use Todoist to break down weekly objectives into daily tasks. + +Use Todoist to break down your weekly objectives into daily tasks. Create a new project called "24-Week Training Plan" and set up sections for each week. + +### Step 5: Allocate time for learning, practicing, and reviewing your progress. + +Allocate time for learning new concepts, practicing, and reviewing your progress. Use time blocking or the Pomodoro Technique to allocate focused time to tasks and avoid multitasking. + +### Step 6: Prioritize tasks based on importance and urgency. + +Prioritize your tasks based on importance and urgency using the Eisenhower Matrix or ABCDE method. Focus on high-impact tasks first and address lower-priority tasks when time permits. + +### Step 7: Set up a Trello board for your training plan. + +Set up a Trello board for your 24-week training plan. Create lists for workflow stages (e.g., To Do, In Progress, Review, Completed), and add cards for tasks and objectives. Move cards between lists as you progress. + +### Step 8: Organize your learning materials in Google Drive. + +Organize your learning materials in Google Drive by creating folders for each week and adding documents, resources, and project files as needed. + +### Step 9: Use automation tools to automate repetitive tasks or sync data. + +Use automation tools like Zapier or Integromat to automate repetitive tasks or sync data between Google Calendar, Todoist, Trello, and Google Drive. + +### Step 10: Use browser extensions to quickly add tasks or cards. + +Use browser extensions like Todoist for Chrome or Trello for Chrome to quickly add tasks or cards without leaving your current webpage. 
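Before moving on to tracking (Step 11), the per-week structure from Steps 4 and 8 can be scaffolded locally and then synced to Google Drive or mirrored in Todoist and Trello. A small sketch, assuming a 24-week plan and a placeholder `training-plan` directory:

```bash
#!/usr/bin/env bash
# Sketch: create one folder per week of the 24-week plan, each with a notes file,
# so learning materials can be organized locally and then synced to Google Drive.
set -euo pipefail

base="training-plan"   # placeholder directory name
weeks=24

for i in $(seq -w 1 "$weeks"); do
  dir="$base/week-$i"
  mkdir -p "$dir/resources" "$dir/projects"
  # Seed a notes file with headings for the week's objective, tasks, and review.
  if [ ! -f "$dir/notes.md" ]; then
    printf '# Week %s\n\n- Objective:\n- Tasks:\n- Review:\n' "$i" > "$dir/notes.md"
  fi
done
```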
+ +### Step 11: Set SMART goals and track your progress. + +Set SMART goals for your training plan and track your progress regularly. Use tools like RescueTime or Clockify to monitor your time spent on tasks and identify areas for improvement. + +### Step 12: Schedule regular breaks and leisure activities. + +Schedule regular breaks and leisure activities to maintain a healthy work-life balance and prevent burnout. + +### Step 13: Collaborate with others and seek feedback. + +If you are working with others, use tools like Slack or Microsoft Teams to streamline communication and collaborate effectively. Schedule regular check-ins or meetings to discuss progress and share feedback. + +### Step 14: Periodically review and adjust your workflow. + +Periodically review your workflow and make adjustments based on your experiences, new tools, or changing priorities. Seek feedback from others who have successfully completed similar training programs or have expertise in the field. + +**Additional tips:** + +- Find a learning method that works best for you. Some people learn best by reading, while others learn best by watching or doing. +- Don't be afraid to ask for help. If you are struggling with a particular concept or task, reach out to a mentor, friend, or online community for assistance. +- Celebrate your successes. As you progress through your training plan, take the time to celebrate your accomplishments. This will help you stay motivated and on track. + +## 24-Week Training Plan + +**Week 1-4:** + +- Introduction to web development history and terminology +- HTML fundamentals +- CSS fundamentals +- Web design principles + +**Week 5-10:** + +- CSS frameworks (e.g., Bootstrap, Tailwind CSS) +- JavaScript fundamentals +- DOM manipulation +- Asynchronous JavaScript +- Web accessibility and performance + +**Week 11-14:** + +- Svelte framework fundamentals +- State management and routing in Svelte +- Building small projects with Svelte +- Website project planning and collaboration + +**Throughout the plan:** + +- Allocate adequate time for practice and project work +- Schedule regular breaks and leisure activities + +**Additional suggestions:** + +- Use a learning management system (LMS) +- Join a study group or online community +- Seek out mentors or coaches + +**Tips for success:** + +- Be dedicated and hardworking +- Focus on learning the core concepts +- Practice regularly +- Build projects to apply your skills +- Get feedback from others diff --git a/tech_docs/webdev/wp_kadence_wireframe.md b/tech_docs/webdev/wp_kadence_wireframe.md new file mode 100644 index 0000000..21e7039 --- /dev/null +++ b/tech_docs/webdev/wp_kadence_wireframe.md @@ -0,0 +1,291 @@ +# **Homepage Structure Using Kadence Blocks** + +--- + +## **Hero Section** + +### **Group:** Hero Introduction + +- **Block:** Row Layout (Single Row) +- **Background:** Video or Image (16:9 aspect ratio) with `alt` text including primary keyword. +- **Block:** Advanced Heading + - **Content:** Main heading with primary keyword. Secondary keyword in the subheading. +- **Meta Description:** Informative description with the primary keyword. +- **Block:** Button + - **Action:** "Book an Event" with a descriptive `aria-label` for accessibility. + +**Kadence Row Layout (Single Row)** + +- **Background Settings**: Choose from color, gradient, image, or video. If using an image or video, ensure it's web-optimized. +- **Padding & Margin**: Adjust to create space and center content. 
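Since the row background should be web-optimized before upload, a quick command-line pass helps keep the hero lightweight. A sketch using common tools (ImageMagick, cwebp, ffmpeg); file names are placeholders:

```bash
# Sketch: web-optimize hero media before uploading it to WordPress.
# Resize the hero image to 1920px wide and export a compressed WebP copy.
convert hero-original.jpg -resize 1920x -quality 82 hero.jpg
cwebp -q 80 hero.jpg -o hero.webp

# Re-encode the hero video: scale to 1920px wide, strip audio, enable fast start for streaming.
ffmpeg -i hero-original.mov -vf "scale=1920:-2" -c:v libx264 -crf 28 -preset slow -an -movflags +faststart hero.mp4
```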
+ +**Kadence Advanced Heading** + +- **Font Settings**: Use H1 for main heading. Adjust font size, weight, and line-height for clarity. +- **Content**: Main heading with primary keyword. Add a secondary keyword in the subheading. + +**Kadence Button** + +- **Design**: Use Kadence’s styling options for button size, color, border-radius, and hover effects. +- **Link**: "Book an Event". Ensure the link directs to the booking section or page. + +- **Row Layout (Single Row)**: This is appropriate for creating a hero section, as Kadence's Row Layout provides flexibility for both background and content settings. +- **Advanced Heading**: The Advanced Heading block in Kadence is suitable for hero section headings due to its extensive customization options. +- **Button**: The standard Gutenberg Button block is ideal for CTAs, especially in hero sections. +- **Video**: Consider adding a short video showcasing the chef's work for an engaging visitor experience. + + > - Provides an immediate visual impact and introduces visitors to the chef's main offering. + > - Call to Action: "Book an Event" + > - Ensure the hero image loads quickly to reduce bounce rate. + > - Position the main heading and CTA (Call to Action) button above the fold for immediate visibility. + > - Use high-contrast colors for text and CTA to make them stand out against the background. + +--- + +## **About the Chef** + +### **Group:** Chef's Background + +- **Block:** Row Layout (Two Columns: 1/3 + 2/3) + - **Column 1:** + - **Block:** Image (1:1 or 3:2 aspect ratio) with `alt` text describing the chef. + - **Column 2:** + - **Block:** Advanced Heading + - **Content:** Chef's name (with keyword) and bio. Include chef's relevant keywords in the bio. + - **Links:** Chef's social media profiles. + +**Kadence Row Layout (Two Columns: 1/3 + 2/3)** + +- **Column Gap**: Adjust to create a harmonious space between image and text. + +**Kadence Image Block (1/3 column)** + +- **Image Settings**: Choose a high-quality portrait of the chef and ensure it's web-optimized. +- **Alt Text**: Briefly describe the chef. + +**Kadence Advanced Heading (2/3 column)** + +- **Font Settings**: Use H2 tag and adjust font size, weight, and line-height. +- **Content**: Chef's name followed by a concise bio, ensuring a natural flow with keyword integration. + +**Kadence Social Links (2/3 column)** + +- **Design**: Customize icon size and colors. +- **Links**: Directly link to the chef's profiles on different platforms. + +- **Row Layout (Two Columns: 1/3 + 2/3)**: This layout is effective for combining an image with textual content. The 1/3 column is suitable for the chef's image, while the 2/3 column can house the textual content. +- **Social Media**: Include links to the chef's social media profiles for increased engagement. + +> - A brief introduction to the chef can help personalize the website and establish credibility. +> - Include a short bio and portrait. +> - Use a high-quality image of the chef. This helps in establishing trust. +> - Ensure the bio is concise and highlights the chef's achievements or specialties. +> - Include relevant keywords in the bio without keyword stuffing. +> - Use schema markup for the chef's bio to improve search engine visibility. + +--- + +## **Services Offered** + +### **Group:** Services Overview + +- **Block:** Row Layout (Three Columns) + - **Each Column:** + - **Block:** Info Box with image (16:9 or 4:3 aspect ratio) with `alt` text (include service-specific keywords). 
+ - **Content:** Heading with service-specific keywords, concise description, and Button ("Learn More" or "Book Now" with `aria-label`). + +**Kadence Row Layout (Three Columns)** + +- **Column Gap**: Adjust for even spacing between columns. Ensure columns stack on mobile for responsiveness. + +**Kadence Info Box (Each Column)** + +- **Image**: Use service-specific images that are web-optimized. +- **Heading**: Integrate service-specific keywords. Adjust font for clarity. +- **Description**: Provide concise service details. +- **Button**: Link to detailed service pages or booking options, using Kadence button styling. + +- **Row Layout (Three Columns)**: This layout is suitable for showcasing multiple services side by side. For mobile views, ensure that the columns stack for responsiveness. +- **Info Box**: The Info Box block in Kadence is versatile and combines an image, heading, and button, making it ideal for service descriptions. + +> - Showcase the primary services to give visitors an idea of what the chef specializes in. +> - Use concise descriptions and visuals. +> - Call to Actions: "Learn More" or "Book Now" +> - Highlight the most popular or signature services to capture interest. +> - Use clear and descriptive alt texts for service images. +> - Include CTAs within each service to drive user action. + +--- + +## **Testimonials Slider** + +### **Group:** Client Testimonials + +- **Block:** Testimonial with Ratings + - **Content:** Rotating testimonials with name, designation, company, testimonial text, and ratings. + - **Link:** "Read All Testimonials" with `title` attribute for more context. + +**Kadence Testimonial Carousel** + +- **Layout**: Choose a design that allows for image, name, designation, and testimonial text. +- **Images**: Use uniform size, web-optimized images of the testimonial giver. +- **Content**: Display a curated list of impactful testimonials. +- **Navigation**: Adjust arrow and dot settings for user-friendly navigation. + +- **Testimonial**: The Testimonial block provided by Kadence is apt for rotating testimonials and can also display associated images. +- **Ratings**: Integrate a rating system for testimonials to enhance credibility. + +> - Social proof is powerful. Displaying a few select testimonials can build trust. +> - Call to Action: "Read All Testimonials" +> - Display testimonials with photos to increase credibility. +> - Rotate the most impactful testimonials. +> - Ensure the slider doesn’t move too fast, allowing users to read comfortably. +> - Use schema markup for testimonials to increase their visibility in search engines. + +--- + +## **Upcoming Events/Classes** + +### **Group:** Events Schedule + +- **Block:** Row Layout (Single Row) + - **Block:** Advanced Button (for viewing all events with `title` attribute) + - **Block:** Calendar with Link to Detailed Events Page + +**Kadence Advanced Button** + +- **Design**: Customize size, color, and hover effects. +- **Link**: Direct to a page with more comprehensive event details. + +**Embed Block** + +- **Integration**: Embed Google Calendar or another interactive system. Ensure it's styled to match the site's theme. + +- **Calendar Integration**: Since neither Kadence nor Gutenberg provides a dedicated "Calendar" block, consider integrating Google Calendar, especially if it's linked with Calendly on the backend. +- **Detailed Events Page**: Include a link to a page with more comprehensive event details. + +> - Inform visitors of any upcoming events or classes they can attend or book. 
+> - Call to Action: "View All Events" +> - Highlight limited-time or special events to create a sense of urgency. +> - Ensure the calendar is interactive and mobile-responsive. +> - Optimize event descriptions with relevant keywords. +> - Use schema markup for events to improve search engine visibility. + +--- + +## **Gallery** + +### **Group:** Image Gallery + +- **Block:** Gallery with Lightbox + - **Content:** Images from past events/classes with descriptive `alt` texts. + - **Caption:** Write captions for each image with keywords. + +**Kadence Gallery Block** + +- **Layout**: Choose between grid or masonry layout. Enable lightbox feature for enhanced image viewing. +- **Images**: Upload web-optimized images with consistent aspect ratios. +- **Captions**: Provide concise descriptions with keyword integration. + +- **Gallery Block**: Kadence's Gallery block is recommended due to advanced features like lightbox and masonry grid options. +- **Lightbox**: Implement a lightbox feature for an enhanced viewing experience. + +> - A visual representation of past events, dishes, or classes can be enticing. +> - Call to Action: "View Full Gallery" +> - Optimize images for quick loading and mobile responsiveness. +> - Use descriptive alt texts for each image for SEO benefits. +> - Consider adding a lightbox feature for a larger view of images upon clicking. + +--- + +## **Newsletter Signup** + +### **Group:** Newsletter Subscription + +- **Block:** Row Layout (Single Row) + - **Block:** Advanced Heading (for the section's title with primary or secondary keyword) + - **Subtitle:** Offer such as "Sign up and receive a 10% discount on your next booking or a free recipe." + - **Block:** Form (with `aria-label` for the email input field and "Subscribe" button). + +**Kadence Form Block** + +- **Fields**: Add fields for name and email. Customize placeholder texts. +- **Button**: Style the "Subscribe" button using Kadence settings. +- **Notifications**: Set up email notifications for each signup. + +- **Forms**: While Kadence offers a form block, for specialized forms, especially newsletter sign-ups and detailed contact forms, consider linking to Google Forms for more functionality and flexibility. + +> - Engage visitors by offering them a chance to stay updated with news, recipes, or special offers. +> - Call to Action: "Subscribe" +> - Position this near the footer but ensure it's visually distinct. +> - Make the sign-up process simple with minimal fields. +> - Ensure GDPR compliance if collecting emails from European visitors. + +--- + +## **Contact & Booking** + +### **Group:** Contact Details + +- **Block:** Row Layout (Two Columns) + - **Column 1:** + - **Block:** Form (with fields for name, email, event type, date, and message, all with proper `aria-label` attributes for accessibility). + - **Column 2:** + - **Block:** Advanced Heading (for contact details, include keywords like "Contact [Chef's Name] for Corporate Cooking Events"). + +**Kadence Row Layout (Two Columns)** + +- **Column Gap**: Adjust to create an even space between form and contact details. + +**Kadence Form Block (1/2 column)** + +- **Fields**: Include fields for name, email, event type, date, and message. Customize placeholders. +- **Button**: Style the "Submit" button using Kadence settings. + +**Kadence Advanced Heading (1/2 column)** + +- **Content**: Display contact details with keyword integration, ensuring natural readability. + +> - Provide an easy way for visitors to reach out or book an event. 
+> - Call to Action: "Submit" +> - Make the form fields intuitive and easy to fill. +> - Use clear error messages and confirmations. +> - Ensure the form is mobile-responsive and loads quickly. +> - Consider adding a map showing the chef's location or primary service area. + +--- + +## **Footer** + +### **Group:** Footer Information + +- **Block:** Row Layout (Multiple Rows) + - **Row 1:** + - **Block:** Navigation Menu (with `title` attributes for each link for more context). + - **Row 2:** + - **Block:** Icon List (for social media icons with proper `aria-label` attributes). + - **Row 3:** + - **Block:** Advanced Heading (for contact information with Schema Markup for local SEO). + +**Kadence Navigation Menu** + +- **Design**: Create a simple footer menu with essential links. + +**Kadence Icon List** + +- **Design**: Customize icon sizes, colors, and hover effects. +- **Links**: Directly link icons to respective social media profiles. + +**Kadence Advanced Heading** + +- **Content**: Display contact information and any necessary disclaimers. + +- **Navigation Menu**: Utilize Kadence's Navigation block for a more customized menu experience in the footer. +- **Mobile Responsiveness**: Ensure that all blocks, especially those with columns, are set to stack or rearrange appropriately on mobile views. + +> - Include essential links, contact information, and social media icons. +> - Keep the footer organized with clear categories or columns. +> - Ensure links are easy to click, especially on mobile devices. +> - Include an SEO-friendly sitemap link in the footer to aid search engine crawling. + +---