There are many online documents describing RAC configuration requirements, and this post is no different. As I have started working on RAC, I am keeping these notes for myself in case I need a refresher later.
1. Input Parameters
- General Information:
  - SYS password
  - Type of database (Single Instance, RAC, Exadata Single Instance, Exadata RAC)
  - Oracle Home (ORACLE_HOME)
  - Environment (RND/QAP/PROD/UAT)
- Database Configuration:
  - OSI (single instance) Primary/Standby, RAC Primary/Standby, or Exadata
  - DB Recovery File Destination size
  - SGA and PGA sizes (System Global Area, Program Global Area)
2. Listener Configuration
- SQL*Net Configuration:
  - Check and validate SQLNET.ORA parameters
  - Ensure in-flight encryption is enabled (e.g., AES256)
- Listener Configuration:
  - Single Instance (OSI):
    - Create LISTENER.ORA for OSI
    - Add a listener entry for the primary database
    - Configure the SSL wallet for secure communication
    - Ensure appropriate authentication protocol compatibility
  - RAC Configuration:
    - Check for active/duplicate listener entries
    - Add/remove listener entries as needed
    - Validate a SQL*Plus connection via TLS (test a successful TLS connection from one client)
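As a sketch of what this step might automate, here is a minimal SQLNET.ORA/LISTENER.ORA pair for an OSI host, laid down from a shell script. The host name, ports, and wallet directory are placeholders I made up, not values from this checklist:

#!/bin/bash
# Hypothetical sketch: write sqlnet.ora and listener.ora for an OSI host
TNS_ADMIN="${ORACLE_HOME}/network/admin"

cat > "${TNS_ADMIN}/sqlnet.ora" <<'EOF'
# Require AES256 in-flight (native network) encryption
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256)
# Authentication protocol compatibility (12c password verifiers and newer)
SQLNET.ALLOWED_LOGON_VERSION_SERVER = 12
# SSL wallet used for TCPS connections
WALLET_LOCATION = (SOURCE = (METHOD = FILE)
  (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
EOF

cat > "${TNS_ADMIN}/listener.ora" <<'EOF'
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCPS)(HOST = dbhost01)(PORT = 2484))
    )
  )
EOF

lsnrctl start LISTENER
# Then validate a TLS connection from one client, e.g.:
# sqlplus app_user@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=dbhost01)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=mydb)))"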
3. Database Cleanup
- Oracle Spatial (MDSYS):
  - Drop Oracle Spatial during provisioning if required
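For the Spatial cleanup, something along these lines could work; mddins.sql is the Spatial deinstall script shipped under $ORACLE_HOME/md/admin, but verify the documented removal procedure for your exact database version before running it:

# Hypothetical sketch: deinstall Oracle Spatial (MDSYS) during provisioning
sqlplus / as sysdba <<'EOF'
@?/md/admin/mddins.sql
EOF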
4. Initialization Parameters
Mandatory Parameters:
- Flashback (enable/disable as required)
- Block Change Tracking (enable/disable as required)
- Archive Log Mode (verify for all environments except RND)
- Log archive destinations (log_archive_dest_2, log_archive_dest_8)
Resource Management:
- Set the Resource Manager plan and limits
- Define the CPU count allocation
- Configure Resource Manager CPU allocation for workloads
Database Size Adjustments:
- Increase the SYSAUX tablespace size to 5GB (or as required)
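A minimal sqlplus sketch of these steps; the archive destination, plan name, CPU count, and SYSAUX datafile path are all placeholders to adapt per environment:

sqlplus / as sysdba <<'EOF'
-- Flashback (requires archivelog mode and a recovery file destination)
ALTER DATABASE FLASHBACK ON;
-- Block change tracking (uses db_create_file_dest; otherwise add USING FILE '<path>')
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

-- Verify archive log mode (all environments except RND)
ARCHIVE LOG LIST

-- Archive log destinations (repeat for log_archive_dest_8 if used)
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/arch' SCOPE=BOTH SID='*';

-- Resource Manager plan and CPU allocation
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH SID='*';
ALTER SYSTEM SET cpu_count = 8 SCOPE=BOTH SID='*';

-- Grow SYSAUX to 5GB (datafile path is a placeholder)
ALTER DATABASE DATAFILE '/u01/oradata/MYDB/sysaux01.dbf' RESIZE 5G;
EOF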
5. Oracle Environment Configuration
- /etc/oratab Configuration:
  - Ensure the database entry in /etc/oratab is updated with Y for automatic startup
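For example, with MYDB and the home path as placeholders:

# /etc/oratab entry format is SID:ORACLE_HOME:Y|N
grep '^MYDB:' /etc/oratab
# MYDB:/u01/app/oracle/product/19.0.0/dbhome_1:N
# Flip the restart flag to Y for automatic startup
sed -i 's|^\(MYDB:[^:]*\):N$|\1:Y|' /etc/oratab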
6. User Creation & Configuration
Standard User Creation:
- DBA user
- Auditor user
- Application user
Enterprise User Security (EUS) Integration:
- Automate user mapping (e.g., sys password synchronization with EUS)
Unset Dispatchers:
- Remove any unnecessary dispatcher configurations
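A hedged sketch of this step; the account names, grants, and passwords are placeholders (real passwords would come from a vault, and EUS mapping would be scripted against your directory):

sqlplus / as sysdba <<'EOF'
-- DBA user
CREATE USER dba_admin IDENTIFIED BY "ChangeMe_1";
GRANT dba TO dba_admin;

-- Auditor user (AUDIT_VIEWER is the 12c+ unified auditing read role)
CREATE USER db_auditor IDENTIFIED BY "ChangeMe_1";
GRANT create session, audit_viewer TO db_auditor;

-- Application user
CREATE USER app_user IDENTIFIED BY "ChangeMe_1"
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT create session, create table TO app_user;

-- Unset dispatchers if shared server is not needed
ALTER SYSTEM SET dispatchers = '' SCOPE=BOTH SID='*';
EOF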
7. Transparent Data Encryption (TDE)
- TDE Configuration:
- Configure TDE for encryption of data at rest
- Wallet (keystore) setup for TDE
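A minimal sketch of the keystore setup using 12c+ syntax; the keystore path and password are placeholders, and on 18c and later the WALLET_ROOT/TDE_CONFIGURATION parameters are the preferred configuration mechanism:

sqlplus / as sysdba <<'EOF'
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/wallet/tde'
  IDENTIFIED BY "WalletPass_1";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "WalletPass_1";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "WalletPass_1" WITH BACKUP;
-- New tablespaces can then be created with ENCRYPTION USING 'AES256'
EOF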
8. Data Pump Job (For RND Environment)
- Job Configuration:
- Create a Data Pump job for automatic export/import tasks
- Schedule the job via crontab for regular backups or migrations in the RND environment
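For instance, a nightly schema export plus its crontab entry might look like this; the schema, directory object, and schedule are placeholders, and in practice credentials should come from a secure external password store rather than the command line:

# rnd_export.sh -- nightly schema export (names are placeholders)
expdp system@MYDB schemas=APP_USER directory=DATA_PUMP_DIR \
  dumpfile=app_user_%U.dmp logfile=app_user_exp.log reuse_dumpfiles=yes

# crontab entry: run the export at 02:00 every day
0 2 * * * /u01/scripts/rnd_export.sh >> /u01/scripts/rnd_export.cron.log 2>&1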
9. Backup Configuration
- Backup Strategy:
- Configure Oracle RMAN backups
- Schedule automated backups for various environments
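A minimal RMAN sketch; the retention window and compression choice are placeholders to tune per environment:

rman target / <<'EOF'
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG DELETE INPUT;
DELETE NOPROMPT OBSOLETE;
EOF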
10. Oracle Enterprise Manager (OEM)
- Target Configuration:
- Add the newly provisioned database to Oracle Enterprise Manager (OEM) for centralized management and monitoring
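With emcli this could look roughly as follows; the target name, host, and credentials are placeholders, and the property keys should be verified against your OEM release:

emcli login -username=sysman
emcli add_target \
  -name="MYDB" \
  -type="oracle_database" \
  -host="dbhost01.example.com" \
  -credentials="UserName:dbsnmp;password:********;Role:Normal" \
  -properties="SID:MYDB;Port:1521;OracleHome:/u01/app/oracle/product/19.0.0/dbhome_1;MachineName:dbhost01.example.com"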
11. Onboarding Users
- User Management:
- Configure user access and roles post database creation
- Implement role-based access control (RBAC) for different users (DBA, Auditor, Application)
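A sketch of the role-based grants; the role names, the app_owner.orders table, and the privilege lists are illustrative placeholders aligned with the DBA/Auditor/Application split above:

sqlplus / as sysdba <<'EOF'
CREATE ROLE app_read;
CREATE ROLE app_write;
-- Object grants would be added per table, e.g.:
GRANT SELECT ON app_owner.orders TO app_read;
GRANT INSERT, UPDATE, DELETE ON app_owner.orders TO app_write;
-- Map roles to users
GRANT app_read TO db_auditor;
GRANT app_read, app_write TO app_user;
EOF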
12. Service Creation (For OSI and RAC)
- Service Configuration:
- For OSI:
  - Create the necessary database services (e.g., a mydb service)
  - Add triggers for service management (startup/shutdown)
- For RAC:
  - Create services for load balancing and failover
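A sketch of both cases; the service, database, and instance names are placeholders:

# OSI: create and start the service, and keep it running across restarts
sqlplus / as sysdba <<'EOF'
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'mydb_svc',
                              network_name => 'mydb_svc');
  DBMS_SERVICE.START_SERVICE('mydb_svc');
END;
/
CREATE OR REPLACE TRIGGER start_services_after_startup
AFTER STARTUP ON DATABASE
BEGIN
  DBMS_SERVICE.START_SERVICE('mydb_svc');
END;
/
EOF

# RAC: let Clusterware manage the service for load balancing and failover
srvctl add service -db MYDB -service mydb_svc \
  -preferred "MYDB1,MYDB2" -failovertype SELECT -failovermethod BASIC
srvctl start service -db MYDB -service mydb_svc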
13. Standby Database Configuration (To Be Revisited Later)
- Note:
- The process for configuring standby databases (physical/logical) will be addressed at a later stage.
Summary of Key Areas for Automation:
- Database Environment Setup
- Proper input parameter configuration (database type, environment, etc.)
- Listener and SQL*Net configurations
- Database Initialization
- Key initialization parameters (flashback, archive logs, resource management, etc.)
- User and Security Configuration
- Creation of standard users, EUS mapping, TDE setup
- Backup and Monitoring
- RMAN backup configuration, Data Pump job for RND, OEM target addition
- Service and High Availability Configuration
- Service creation and configuration for both OSI and RAC environments
Capacity Planning Questionnaire
1. Data Volume and Growth
What is the yearly volume of data?
- Provide the volume in terms of MB/GB/TB per year.
What is the expected yearly growth rate of data?
- Provide percentage growth (e.g., 20% per year).
Please provide this information in a tabular format for monthly projections.
2. Data Access Patterns
What is the daily access pattern (CRUD operations)?
- Provide the details in terms of:
  - Read operations
  - Insert operations
  - Update operations
  - Delete operations
- Please provide counts for each operation type on a daily basis.
What is the peak concurrency factor (99th percentile) for CRUD operations?
- Define the peak number of concurrent transactions for each operation (e.g., peak inserts, updates, reads, deletes).
What is the workload type for this application?
- Real-time or Batch processing?
3. Data Retention and Archiving
What is the data retention period for the application data?
- How long will the data be retained in the database (e.g., 1 year, 5 years, permanent)?
Will data be archived after the retention period?
- If so, how will data be archived (e.g., to an external system, compressed, etc.)?
What is the size of the archived data per year (in GB/TB)?
4. Response Time and Performance
What is the expected average response time for database queries?
- Provide the target response time for typical queries (e.g., 1 second, 2 seconds).
Are there any performance SLAs?
- For example, query response time under peak load or during business-critical hours.
5. Query and Data Access
How far back in time are queries expected to go?
- Most queries retrieve data for how many months, days, or years?
- Does this differ by type of query (e.g., operational vs. reporting queries)?
What is the expected data retrieval frequency for the last N months/years of data?
- E.g., queries frequently access data from the last 3 months, or 1 year.
6. Data Migration and Integration
Are there any plans to migrate data from an existing data store to this new Oracle database?
- If yes, please provide volumetric details for the data migration.
What is the total size of data to be migrated?
- Please provide data size details in GB or TB, including any associated metadata.
Will data need to be synchronized during the migration (e.g., delta migration, real-time synchronization)?
7. Database Transactions and Load
What is the average number of read requests per day?
- Also, provide the average read requests per second.
What is the average number of write requests per day?
- Also, provide the average write requests per second.
What is the peak number of read requests per day (peak load)?
- Also, provide the peak read requests per second during high usage times.
What is the peak number of write requests per day (peak load)?
- Also, provide the peak write requests per second during high usage times.
8. Current Technology and Architecture
What is the current data store technology and version?
- Provide details of the existing database (e.g., MySQL, MSSQL, etc.).
What is the overall technology stack for the application?
- Database platform, application framework, middleware, etc.
How many application instances will connect to the Oracle database?
- This includes web servers, app servers, batch processes, etc.
9. Application Criticality and Availability
What is the impact on end users if the application is unavailable?
- Does the application impact business operations, customer experience, etc.?
Is this a Tier-1 application?
- Define if the application is business-critical (e.g., 24/7 uptime requirement, high availability).
What is the expected uptime SLA for this application?
- E.g., 99.9%, 99.99% uptime.
10. Security and Compliance
Please provide the data classification details (e.g., sensitive, confidential, public).
- Does the application handle sensitive information (e.g., PII, PCI, HIPAA)?
Are there any encryption requirements for data at rest or in transit?
- For example, TDE (Transparent Data Encryption) or SSL/TLS encryption.
Are there any non-English characters stored in the database?
- E.g., multi-lingual data storage, special characters, or non-ASCII text.
11. Future Growth and Scalability
What is the anticipated growth in users or transactions in the next 1-3 years?
- What is the expected user base or transaction volume increase?
What are the scalability plans for the application?
- Will the application scale horizontally (e.g., adding more app servers, Oracle RAC) or vertically (e.g., larger instance size, more CPU/RAM)?
Additional Considerations
Database Failover and High Availability:
- Are there any high availability (HA) or disaster recovery (DR) requirements, such as Oracle RAC, Data Guard, or backup solutions?
Backup and Recovery Requirements:
- What are the backup window and recovery objectives (e.g., RTO, RPO)?
Compliance and Auditing:
- Are there any auditing requirements (e.g., user access logs, query auditing) for compliance purposes?
Application Questionnaire
1. Application Overview
- What is this application about?
- Description: Provide a brief overview of the application, its purpose, and the business function it supports.
- Expected Output Sample:
"This application is a customer relationship management (CRM) system for tracking interactions and sales data across multiple customer channels. It integrates with multiple external data sources, performs real-time analytics, and generates personalized reports for sales teams."
2. Transaction Types and Characteristics
- What type of transactions does the application handle?
- Description: Provide a detailed description of the types of transactions (e.g., financial, e-commerce, real-time data processing, batch processing) handled by the application.
- Expected Output Sample:
"The application processes transactions related to customer order placements, payment processing, inventory updates, and order shipment tracking. It also generates daily summary reports for management. Transactions are both real-time and batch-oriented, with some overnight data aggregation tasks."
3. Limiting Impact on EXADATA
- How will the application limit its impact on the EXADATA environment?
- Description: Provide the strategies in place to minimize resource consumption, such as optimizing queries, load balancing, or using features like Oracle Real Application Clusters (RAC), database partitioning, or other performance-enhancing technologies.
- Expected Output Sample:
"The application will implement connection pooling to reduce the number of direct connections to EXADATA, ensuring that the database handles fewer, more efficient connections. Additionally, SQL queries will be optimized for performance, with indexing on critical tables and partitioning of large datasets (e.g., sales data). We will also use Oracle RAC to distribute load evenly across multiple nodes."
4. Use of Connection Pooling
- Are you going to use connection pooling?
- Description: Specify if connection pooling will be implemented and how it will be managed (e.g., using an external connection pool, built-in Oracle connection pooling, etc.).
- Expected Output Sample:
"Yes, connection pooling will be implemented using Oracle's built-in connection pool (Universal Connection Pool - UCP) to ensure efficient management of database connections. This approach will reduce connection overhead and minimize resource contention on the EXADATA system. Additionally, we will configure pool sizes based on expected peak load times."
5. Determination of Capacity
- How did you determine the capacity requirements (CPU, RAM, and storage)?
- Description: Explain the process and methodology used to estimate the required resources for CPU, RAM, and storage. Were any load tests or benchmarks conducted? Were historical data or application profiling used?
- Expected Output Sample:
"The capacity requirements were determined based on the expected transaction volume and performance SLAs. We conducted load testing with the application’s expected user load and transaction rates to simulate peak loads. CPU and RAM requirements were derived based on historical performance data from the legacy system, while storage requirements were estimated based on expected data growth (e.g., 5GB/day). We also accounted for a 20% buffer for future growth and overhead."
6. CPU, RAM, and Storage Requirements Sources
- Where did the CPU, RAM, and storage requirements come from?
- Description: Provide a detailed explanation of where the specific requirements for CPU, RAM, and storage were sourced from (e.g., application profiling, performance monitoring, vendor recommendations, testing results).
- Expected Output Sample:
"CPU and RAM requirements were derived from performance benchmarks and simulations using test workloads based on historical data from the existing system. For storage, the calculation was based on an estimated data growth rate of 5GB/day, plus an additional 10TB for transactional logs and backups over the next three years. The storage requirements were validated by consulting with Oracle support for best practices and reviewing similar deployment scenarios."
7. Peak Resource Usage
- What are the expected peak CPU, RAM, and storage usage values?
- Description: Provide peak values for CPU, RAM, and storage, particularly during high usage periods (e.g., end of month, high traffic periods, heavy reporting times).
- Expected Output Sample:
"Peak CPU usage is expected to reach 80% during month-end reporting, with up to 16 cores utilized. RAM usage could peak at 256GB during heavy real-time analytics processing. Storage usage is projected to peak at 100TB due to high-frequency transactional data and backups."
8. Growth Projections and Resource Expansion
- What is the expected rate of growth for CPU, RAM, and storage over the next 1-3 years?
- Description: Provide estimates for resource growth over the next 1 to 3 years based on expected business expansion, transaction volume growth, or application enhancements.
- Expected Output Sample:
"We expect CPU usage to grow by 20% annually as the number of concurrent users increases. RAM usage is anticipated to grow by 15% per year, mainly due to the increasing complexity of queries and reporting workloads. Storage is projected to grow by 25% annually, driven by a combination of transaction data, backup growth, and new features added to the application."
9. Performance and Load Testing
- Have you conducted any performance/load testing to determine the infrastructure requirements?
- Description: Provide details on any testing done to validate the estimated capacity needs, including testing methodologies (e.g., load testing, stress testing) and results.
- Expected Output Sample:
"Yes, we conducted performance testing using load simulation tools to replicate the expected user load and transaction volumes. The tests were designed to stress the system with peak transaction rates and concurrent users. We also conducted stress testing to identify potential bottlenecks, which helped us size the CPU, RAM, and storage requirements appropriately."
10. Oracle Database Features and Impact on EXADATA
- Are you using any Oracle-specific features (e.g., Oracle RAC, Exadata storage optimizations, Oracle Data Guard, etc.) to optimize database performance?
- Description: Identify any Oracle features that are expected to impact the database infrastructure, particularly in terms of resource usage on EXADATA.
- Expected Output Sample:
"Yes, we are using Oracle Real Application Clusters (RAC) for high availability and load balancing, which will distribute the workload across multiple EXADATA nodes. We are also leveraging Exadata storage optimizations such as Hybrid Columnar Compression (HCC) to reduce the storage footprint and improve query performance. Additionally, we will implement Oracle Data Guard for disaster recovery."
11. Impact on EXADATA
- How will you mitigate the impact of the application on EXADATA's resources (CPU, memory, and storage)?
- Description: Provide details on how the application is designed to limit excessive resource consumption on EXADATA, such as through query optimization, load balancing, and using Exadata-specific features.
- Expected Output Sample:
"To mitigate impact, we will ensure that all queries are optimized, particularly complex joins and subqueries. Additionally, we will utilize partitioning to improve query performance on large tables. Connection pooling will limit the number of concurrent database connections, reducing memory overhead. We will also leverage Exadata's Flash Cache for faster access to frequently queried data, ensuring minimal impact on storage performance."
Summary of Output Samples
| Question | Expected Output |
|---|---|
| What is this application about? | CRM system for sales and customer data tracking. |
| What type of transactions? | Order placement, payment processing, inventory updates, report generation. |
| How will you limit impact on EXADATA? | Connection pooling, SQL query optimization, use of Oracle RAC, data partitioning. |
| Will you use connection pooling? | Yes, using Oracle's UCP to manage connections efficiently. |
| How did you determine capacity? | Load testing, transaction volume simulation, historical data analysis. |
| Where did the CPU, RAM, and storage figures come from? | Benchmarks, performance data from the legacy system, storage growth rate estimates. |
| Peak CPU usage? | 80% during month-end, using 16 cores. |
| Growth projections? | CPU: 20%, RAM: 15%, Storage: 25% annually. |
| Did you conduct load testing? | Yes, using simulation tools to stress-test with peak users and transaction rates. |
| Oracle features impacting EXADATA? | RAC for load balancing, Exadata optimizations (HCC, Flash Cache). |
Script 1: collect_files.sh
#!/bin/bash
# Define the output file
output_file="file_structure.txt"
# Clear the previous output (if any)
> "$output_file"
# Function to check if a file is a text file (not binary)
is_text_file() {
    # Use the 'file' command to check the MIME type
    file_type=$(file --mime-type -b "$1")
    if [[ "$file_type" == text/* ]]; then
        return 0  # It's a text file
    else
        return 1  # It's a binary file
    fi
}

# Function to resolve an absolute path in a portable way
get_absolute_path() {
    # macOS lacks GNU realpath by default, so resolve the path manually there
    if [[ "$(uname)" == "Darwin" ]]; then
        dir_name=$(dirname "$1")
        base_name=$(basename "$1")
        full_path=$(cd "$dir_name" && pwd)/"$base_name"
        echo "$full_path"
    else
        # For Linux or other systems, use realpath or readlink
        realpath "$1" 2>/dev/null || readlink -f "$1"
    fi
}

# Function to traverse directories and collect file info
collect_files() {
    for file in "$1"/*; do
        if [ -d "$file" ]; then
            # It's a directory; recurse into it
            collect_files "$file"
        elif [ -f "$file" ]; then
            # Skip the output file itself so the snapshot does not capture itself
            if [ "$(basename "$file")" = "$output_file" ]; then
                continue
            fi
            # It's a file; check whether it's a text file
            if is_text_file "$file"; then
                # It's a text file; collect its details
                file_name=$(basename "$file")
                # Resolve the absolute path in a portable way
                file_path=$(get_absolute_path "$file")
                file_content=$(cat "$file")
                echo "File: $file_name" >> "$output_file"
                echo "Path: $file_path" >> "$output_file"
                echo "Content:" >> "$output_file"
                echo "$file_content" >> "$output_file"
                echo "----------------------" >> "$output_file"
            else
                echo "Skipping binary file: $file" >> "$output_file"
            fi
        fi
    done
}
# Start from the current directory
collect_files "."
echo "File structure saved to $output_file."
Script 2: rebuild_files.sh
#!/bin/bash
# Define the input file (file structure)
input_file="file_structure.txt"
# Check if the input file exists
if [ ! -f "$input_file" ]; then
echo "Error: File structure file not found!"
exit 1
fi
# Function to rebuild the file structure
rebuild_files() {
    while IFS= read -r line; do
        if [[ "$line" =~ ^File:\ (.*) ]]; then
            file_name="${BASH_REMATCH[1]}"
        elif [[ "$line" =~ ^Path:\ (.*) ]]; then
            file_path="${BASH_REMATCH[1]}"
        elif [[ "$line" =~ ^Content:$ ]]; then
            # Read the file content until the separator line
            content=""
            while IFS= read -r content_line && [[ ! "$content_line" =~ ^---------------------- ]]; do
                content="$content$content_line"$'\n'
            done
            # Create the directory if it doesn't exist
            dir_name=$(dirname "$file_path")
            mkdir -p "$dir_name"
            # Write the content (printf avoids echo's option/escape handling
            # on unusual content)
            printf '%s' "$content" > "$file_path"
            echo "Recreated file: $file_path"
        fi
    done < "$input_file"
}
# Rebuild files and directories
rebuild_files
echo "File structure rebuilt from $input_file."