Sunday, December 11, 2011

RAC configuration requirements

There are many online documents describing RAC configuration requirements, and this post is no different. Since I have started working on RAC, I am keeping these notes for myself in case I need a refresher later.


#!/bin/ksh

# Set the Oracle environment variables (customize these as needed)
export ORACLE_HOME=/path/to/oracle_home
export ORACLE_SID=your_oracle_sid
export PATH=$ORACLE_HOME/bin:$PATH

# Define Oracle username and password (preferably use secure method for credentials)
USER="your_username"
PASSWORD="your_password"

# Run SQL commands via sqlplus
sqlplus -s $USER/$PASSWORD@$ORACLE_SID <<EOF
-- Grant EXECUTE permissions on the specified DBMS packages to PUBLIC
GRANT EXECUTE ON SYS.DBMS_RANDOM TO PUBLIC;
GRANT EXECUTE ON SYS.DBMS_UTILITY TO PUBLIC;
GRANT EXECUTE ON SYS.DBMS_JOB TO PUBLIC;

-- Exit SQL*Plus
EXIT;
EOF

echo "Grants executed successfully!"
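The hard-coded USER/PASSWORD pair above is only a placeholder. One way to avoid keeping the password in the script at all (a sketch; adapt it to your own security standards, and it assumes OS authentication is allowed for the Oracle software owner) is to connect with "/ as sysdba", or to prompt for the password at run time:

#!/bin/ksh
# Option 1: OS authentication -- no password stored anywhere (run as the Oracle software owner)
sqlplus -s / as sysdba <<EOF
GRANT EXECUTE ON SYS.DBMS_RANDOM TO PUBLIC;
EXIT;
EOF

# Option 2: prompt for the password at run time so it never sits in the script
printf "Enter password for %s: " "$USER"
stty -echo; read PASSWORD; stty echo; echo
sqlplus -s "$USER/$PASSWORD@$ORACLE_SID" <<EOF
GRANT EXECUTE ON SYS.DBMS_UTILITY TO PUBLIC;
EXIT;
EOF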

function yes_no() {
    # Initial prompt
    read -p "$1 {y/n}? " ans
    
    # Loop until user provides an answer
    while [[ -z $ans ]]; do
        echo "You have NOT provided an answer."
        read -p "Please answer {y/n}: " ans
    done

    # Handle responses using case statement
    case $ans in
        y|Y|yes|Yes|YES)
            echo "Alright, I am proceeding."
            ;;
        n|N|no|No|NO)
            echo "Roger that! Exiting..."
            exit 1  # Exit the script or return if you want just the function to exit
            ;;
        *)
            echo "Invalid input. Quitting..."
            exit 1
            ;;
    esac
}
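A brief usage sketch (the schema-rebuild step is just a placeholder action), showing how the function gates a destructive step:

yes_no "Do you want to drop and recreate the test schema"
# Execution only reaches this point if the user answered yes;
# any other answer makes yes_no exit the script.
echo "Proceeding with the schema rebuild..."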

Creating an effective questionnaire for developers requires clarity, logical grouping of questions, and a clean layout so the information is easy to fill in and review. Below is a structured, professional, and comprehensive template, designed to be distributed and completed as a Microsoft Word or Excel document.


Database and Application Information Questionnaire: [Project Name / Application Name]

Prepared by: [Your Name]
Date: [Date]
Version: [Version Number]


1. Data Volume and Growth

Question | Response | Comments
What is the yearly volume of data? | [Enter data size in MB/GB/TB] |
What is the expected yearly growth rate of data? | [Enter percentage, e.g., 20%] |
Please provide monthly projections for data volume growth. | [Table: month-by-month projected volume] |

2. Data Access Patterns

Question | Response | Comments
What is the daily access pattern (CRUD operations)? | |
- Read Operations | [Enter daily count] |
- Insert Operations | [Enter daily count] |
- Update Operations | [Enter daily count] |
- Delete Operations | [Enter daily count] |
What is the peak concurrency factor (99th percentile) for CRUD operations? | | Define peak for each operation type.
- Peak Read Concurrency | [Enter count] |
- Peak Insert Concurrency | [Enter count] |
- Peak Update Concurrency | [Enter count] |
- Peak Delete Concurrency | [Enter count] |
What is the workload type for this application? | [Real-time / Batch] |

3. Data Retention and Archiving

Question | Response | Comments
What is the data retention period for the application data? | [Enter number of years or specific period] |
Will data be archived after the retention period? | [Yes / No] |
- If Yes, how will data be archived? | [e.g., to external system, compressed, etc.] |
What is the size of archived data per year? | [Enter GB/TB] |

4. Response Time and Performance

Question | Response | Comments
What is the expected average response time for database queries? | [Enter target time in seconds, e.g., 1 sec] |
Are there any performance SLAs? | [Yes / No] | Define SLAs (e.g., query response time during peak load)

5. Query and Data Access

Question | Response | Comments
How far back in time are queries expected to go? | [Enter time frame, e.g., last 3 months, 1 year] |
Does this differ by query type (e.g., operational vs. reporting)? | [Yes / No] | Specify differences if applicable
What is the expected data retrieval frequency for the last N months/years of data? | [e.g., last 3 months, frequently accessed] |

6. Data Migration and Integration

Question | Response | Comments
Are there plans to migrate data from an existing data store to this new database? | [Yes / No] |
What is the total size of data to be migrated? | [Enter size in GB/TB] |
Will data need to be synchronized during migration? | [Yes / No] | Specify type of synchronization (e.g., delta, real-time)

7. Database Transactions and Load

Question | Response | Comments
What is the average number of read requests per day? | [Enter number] |
- Average read requests per second | [Enter number] |
What is the average number of write requests per day? | [Enter number] |
- Average write requests per second | [Enter number] |
What is the peak number of read requests per day? | [Enter peak load number] |
- Peak read requests per second | [Enter number] |
What is the peak number of write requests per day? | [Enter peak load number] |
- Peak write requests per second | [Enter number] |

8. Current Technology and Architecture

Question | Response | Comments
What is the current data store technology and version? | [Enter DB type and version, e.g., MySQL 5.7] |
What is the overall technology stack for the application? | [Enter DB platform, application framework, etc.] |
How many application instances will connect to the Oracle database? | [Enter number of instances] |

9. Application Criticality and Availability

Question | Response | Comments
What is the impact on end users if the application is unavailable? | [Enter details] |
Is this a Tier-1 application? | [Yes / No] | Define criticality and uptime requirements
What is the expected uptime SLA for this application? | [Enter percentage, e.g., 99.9%] |

10. Security and Compliance

Question | Response | Comments
Please provide the data classification details (e.g., sensitive, confidential, public). | [Enter classification] |
Are there any encryption requirements for data at rest or in transit? | [Yes / No] | Specify requirements (e.g., TDE, SSL/TLS)
Are there any non-English characters stored in the database? | [Yes / No] | E.g., multi-lingual or special characters

11. Future Growth and Scalability

Question | Response | Comments
What is the anticipated growth in users or transactions in the next 1-3 years? | [Enter details] |
What are the scalability plans for the application? | [Horizontal / Vertical scaling] | Specify plans for scaling

Additional Considerations

Database Failover and High Availability:

Question | Response | Comments
Are there any high availability (HA) or disaster recovery (DR) requirements? | [Yes / No] | Specify requirements (e.g., Oracle RAC, Data Guard)

Backup and Recovery Requirements:

Question | Response | Comments
What are the backup window and recovery objectives (e.g., RTO, RPO)? | [Enter details] |

Compliance and Auditing:

Question | Response | Comments
Are there any auditing requirements for compliance purposes? | [Yes / No] | Specify (e.g., user access logs, query auditing)

Instructions for Completion:

  • Please fill in all applicable fields.
  • Provide additional comments if necessary.
  • If any question is not relevant, mark it as “N/A.”
  • For data-related questions, provide estimations or ranges where possible.

Format Options:

  1. Microsoft Word Template: This is a great option for clear, readable sections and allows for easy comments or attachments.
  2. Excel Template: If you prefer a more structured format with calculations or tabular data (like for volume projections), Excel might be better.
  3. Google Forms / Survey: If you want to collect responses digitally, a survey format could be an efficient option.


 

 

Capacity planning questionnaire



1. Data Volume and Growth

  • What is the yearly volume of data?

    • Provide the volume in terms of MB/GB/TB per year.
  • What is the expected yearly growth rate of data?

    • Provide percentage growth (e.g., 20% per year).
  • Please provide this information in a tabular format for monthly projections.


2. Data Access Patterns

  • What is the daily access pattern (CRUD operations)?

    • Provide the details in terms of:
      • Read Operations
      • Insert Operations
      • Update Operations
      • Delete Operations
    • Please provide counts for each operation type on a daily basis.
  • What is the peak concurrency factor (99th percentile) for CRUD operations?

    • Define the peak number of concurrent transactions for each operation (e.g., peak inserts, updates, reads, deletes).
  • What is the workload type for this application?

    • Real-time or Batch processing?

3. Data Retention and Archiving

  • What is the data retention period for the application data?

    • How long will the data be retained in the database (e.g., 1 year, 5 years, permanent)?
  • Will data be archived after the retention period?

    • If so, how will data be archived (e.g., to an external system, compressed, etc.)?
  • What is the size of the archived data per year (in GB/TB)?


4. Response Time and Performance

  • What is the expected average response time for database queries?

    • Provide the target response time for typical queries (e.g., 1 second, 2 seconds).
  • Are there any performance SLAs?

    • For example, query response time under peak load or during business-critical hours.

5. Query and Data Access

  • How far back in time are queries expected to go?

    • Most queries retrieve data for how many months, days, or years?
    • Does this differ by type of query (e.g., operational vs. reporting queries)?
  • What is the expected data retrieval frequency for the last N months/years of data?

    • E.g., queries frequently access data from the last 3 months, or 1 year.

6. Data Migration and Integration

  • Are there any plans to migrate data from an existing data store to this new Oracle database?

    • If yes, please provide volumetric details for the data migration.
  • What is the total size of data to be migrated?

    • Please provide data size details in GB or TB, including any associated metadata.
  • Will data need to be synchronized during the migration (e.g., delta migration, real-time synchronization)?


7. Database Transactions and Load

  • What is the average number of read requests per day?

    • Also, provide the average read requests per second.
  • What is the average number of write requests per day?

    • Also, provide the average write requests per second.
  • What is the peak number of read requests per day (peak load)?

    • Also, provide the peak read requests per second during high usage times.
  • What is the peak number of write requests per day (peak load)?

    • Also, provide the peak write requests per second during high usage times.

8. Current Technology and Architecture

  • What is the current data store technology and version?

    • Provide details of the existing database (e.g., MySQL, MSSQL, etc.).
  • What is the overall technology stack for the application?

    • Database platform, application framework, middleware, etc.
  • How many application instances will connect to the Oracle database?

    • This includes web servers, app servers, batch processes, etc.

9. Application Criticality and Availability

  • What is the impact on end users if the application is unavailable?

    • Does the application impact business operations, customer experience, etc.?
  • Is this a Tier-1 application?

    • Define if the application is business-critical (e.g., 24/7 uptime requirement, high availability).
  • What is the expected uptime SLA for this application?

    • E.g., 99.9%, 99.99% uptime.

10. Security and Compliance

  • Please provide the data classification details (e.g., sensitive, confidential, public).

    • Does the application handle sensitive information (e.g., PII, PCI, HIPAA)?
  • Are there any encryption requirements for data at rest or in transit?

    • For example, TDE (Transparent Data Encryption) or SSL/TLS encryption.
  • Are there any non-English characters stored in the database?

    • E.g., multi-lingual data storage, special characters, or non-ASCII text.

11. Future Growth and Scalability

  • What is the anticipated growth in users or transactions in the next 1-3 years?

    • What is the expected user base or transaction volume increase?
  • What are the scalability plans for the application?

    • Will the application scale horizontally (e.g., adding more app servers, Oracle RAC) or vertically (e.g., larger instance size, more CPU/RAM)?

Additional Considerations

  • Database Failover and High Availability:

    • Are there any high availability (HA) or disaster recovery (DR) requirements, such as Oracle RAC, Data Guard, or backup solutions?
  • Backup and Recovery Requirements:

    • What are the backup window and recovery objectives (e.g., RTO, RPO)?
  • Compliance and Auditing:

    • Are there any auditing requirements (e.g., user access logs, query auditing) for compliance purposes?

     

    == App 

    1. Application Overview

  • What is this application about?
    • Description: Provide a brief overview of the application, its purpose, and the business function it supports.
    • Expected Output Sample:
      "This application is a customer relationship management (CRM) system for tracking interactions and sales data across multiple customer channels. It integrates with multiple external data sources, performs real-time analytics, and generates personalized reports for sales teams."

2. Transaction Types and Characteristics

  • What type of transactions does the application handle?
    • Description: Provide a detailed description of the types of transactions (e.g., financial, e-commerce, real-time data processing, batch processing) handled by the application.
    • Expected Output Sample:
      "The application processes transactions related to customer order placements, payment processing, inventory updates, and order shipment tracking. It also generates daily summary reports for management. Transactions are both real-time and batch-oriented, with some overnight data aggregation tasks."

3. Limiting Impact on EXADATA

  • How will the application limit its impact on the EXADATA environment?
    • Description: Provide the strategies in place to minimize resource consumption, such as optimizing queries, load balancing, or using features like Oracle Real Application Clusters (RAC), database partitioning, or other performance-enhancing technologies.
    • Expected Output Sample:
      "The application will implement connection pooling to reduce the number of direct connections to EXADATA, ensuring that the database handles fewer, more efficient connections. Additionally, SQL queries will be optimized for performance, with indexing on critical tables and partitioning of large datasets (e.g., sales data). We will also use Oracle RAC to distribute load evenly across multiple nodes."

4. Use of Connection Pooling

  • Are you going to use connection pooling?
    • Description: Specify if connection pooling will be implemented and how it will be managed (e.g., using an external connection pool, built-in Oracle connection pooling, etc.).
    • Expected Output Sample:
      "Yes, connection pooling will be implemented using Oracle's built-in connection pool (Universal Connection Pool - UCP) to ensure efficient management of database connections. This approach will reduce connection overhead and minimize resource contention on the EXADATA system. Additionally, we will configure pool sizes based on expected peak load times."

5. Determination of Capacity

  • How did you determine the capacity requirements (CPU, RAM, and storage)?
    • Description: Explain the process and methodology used to estimate the required resources for CPU, RAM, and storage. Were any load tests or benchmarks conducted? Were historical data or application profiling used?
    • Expected Output Sample:
      "The capacity requirements were determined based on the expected transaction volume and performance SLAs. We conducted load testing with the application’s expected user load and transaction rates to simulate peak loads. CPU and RAM requirements were derived based on historical performance data from the legacy system, while storage requirements were estimated based on expected data growth (e.g., 5GB/day). We also accounted for a 20% buffer for future growth and overhead."

6. CPU, RAM, and Storage Requirements Sources

  • Where did the CPU, RAM, and storage requirements come from?
    • Description: Provide a detailed explanation of where the specific requirements for CPU, RAM, and storage were sourced from (e.g., application profiling, performance monitoring, vendor recommendations, testing results).
    • Expected Output Sample:
      "CPU and RAM requirements were derived from performance benchmarks and simulations using test workloads based on historical data from the existing system. For storage, the calculation was based on an estimated data growth rate of 5GB/day, plus an additional 10TB for transactional logs and backups over the next three years. The storage requirements were validated by consulting with Oracle support for best practices and reviewing similar deployment scenarios."

7. Peak Resource Usage

  • What are the expected peak CPU, RAM, and storage usage values?
    • Description: Provide peak values for CPU, RAM, and storage, particularly during high usage periods (e.g., end of month, high traffic periods, heavy reporting times).
    • Expected Output Sample:
      "Peak CPU usage is expected to reach 80% during month-end reporting, with up to 16 cores utilized. RAM usage could peak at 256GB during heavy real-time analytics processing. Storage usage is projected to peak at 100TB due to high-frequency transactional data and backups."

8. Growth Projections and Resource Expansion

  • What is the expected rate of growth for CPU, RAM, and storage over the next 1-3 years?
    • Description: Provide estimates for resource growth over the next 1 to 3 years based on expected business expansion, transaction volume growth, or application enhancements.
    • Expected Output Sample:
      "We expect CPU usage to grow by 20% annually as the number of concurrent users increases. RAM usage is anticipated to grow by 15% per year, mainly due to the increasing complexity of queries and reporting workloads. Storage is projected to grow by 25% annually, driven by a combination of transaction data, backup growth, and new features added to the application."

9. Performance and Load Testing

  • Have you conducted any performance/load testing to determine the infrastructure requirements?
    • Description: Provide details on any testing done to validate the estimated capacity needs, including testing methodologies (e.g., load testing, stress testing) and results.
    • Expected Output Sample:
      "Yes, we conducted performance testing using load simulation tools to replicate the expected user load and transaction volumes. The tests were designed to stress the system with peak transaction rates and concurrent users. We also conducted stress testing to identify potential bottlenecks, which helped us size the CPU, RAM, and storage requirements appropriately."

10. Oracle Database Features and Impact on EXADATA

  • Are you using any Oracle-specific features (e.g., Oracle RAC, Exadata storage optimizations, Oracle Data Guard, etc.) to optimize database performance?
    • Description: Identify any Oracle features that are expected to impact the database infrastructure, particularly in terms of resource usage on EXADATA.
    • Expected Output Sample:
      "Yes, we are using Oracle Real Application Clusters (RAC) for high availability and load balancing, which will distribute the workload across multiple EXADATA nodes. We are also leveraging Exadata storage optimizations such as Hybrid Columnar Compression (HCC) to reduce the storage footprint and improve query performance. Additionally, we will implement Oracle Data Guard for disaster recovery."

11. Impact on EXADATA

  • How will you mitigate the impact of the application on EXADATA's resources (CPU, memory, and storage)?
    • Description: Provide details on how the application is designed to limit excessive resource consumption on EXADATA, such as through query optimization, load balancing, and using Exadata-specific features.
    • Expected Output Sample:
      "To mitigate impact, we will ensure that all queries are optimized, particularly complex joins and subqueries. Additionally, we will utilize partitioning to improve query performance on large tables. Connection pooling will limit the number of concurrent database connections, reducing memory overhead. We will also leverage Exadata's Flash Cache for faster access to frequently queried data, ensuring minimal impact on storage performance."

Summary of Output Samples

Question | Expected Output
What is this application about? | CRM system for sales and customer data tracking.
What type of transactions? | Order placement, payment processing, inventory updates, report generation.
How will you limit impact on EXADATA? | Connection pooling, SQL query optimization, use of Oracle RAC, data partitioning.
Will you use connection pooling? | Yes, using Oracle's UCP to manage connections efficiently.
How did you determine capacity? | Load testing, transaction volume simulation, historical data analysis.
Where did the CPU, RAM, and storage come from? | Benchmarks, performance data from legacy system, storage growth rate estimates.
Peak CPU usage? | 80% during month-end, using 16 cores.
Growth projections? | CPU: 20% growth, RAM: 15%, Storage: 25% annually.
Did you conduct load testing? | Yes, using simulation tools to stress-test with peak users and transaction rates.
Oracle features impacting EXADATA? | RAC for load balancing, Exadata optimizations (HCC, Flash Cache).

source /x/home/oracle/.profile

#read -p "enter PDB name:" pdb
#export PDB=$pdb

select_pdb() {
    # List PDBs (excluding the seed); allow letters, digits, and underscores in names
    local pdb_list=($(sqlplus -s "/ as sysdba" <<EOF | grep -E '^[[:alnum:]_]+$'
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
SELECT name FROM v\$pdbs WHERE name NOT IN ('CDB\$ROOT', 'PDB\$SEED');
EXIT;
EOF
))

    pdb_count=${#pdb_list[@]}

    if [ $pdb_count -eq 0 ]; then
        echo "No PDBs available."
        exit 1
    elif [ $pdb_count -eq 1 ]; then
        PDB_NAME=${pdb_list[0]}
        echo "Using PDB: $PDB_NAME"
    else
        echo "Available PDBs:"
        for i in "${!pdb_list[@]}"; do
            echo "$((i+1)). ${pdb_list[$i]}"
        done
        read -p "Select the PDB to connect to (1-${pdb_count}): " pdb_choice
        PDB_NAME=${pdb_list[$((pdb_choice-1))]}
    fi

    export PDB=$PDB_NAME
    echo "Connecting to PDB: $PDB_NAME"
}

# use this code in other scripts as needed
select_pdb

# Verify that a PDB was actually selected
if [ -z "$PDB_NAME" ]; then
    echo "No PDB selected. Exiting..."
    exit 1
fi
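Once select_pdb has run, the exported $PDB value can be used to switch into the chosen container; a minimal sketch (the query is only an example):

sqlplus -s "/ as sysdba" <<EOF
ALTER SESSION SET CONTAINER = $PDB;
SHOW CON_NAME
SELECT open_mode FROM v\$pdbs WHERE name = '$PDB';
EXIT;
EOF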

==

#!/bin/bash
# Calculates maximum, current, and available capacity (with a buffer) for CPU utilization,
# IOPS, QPS, TPS, and sessions, based on predefined maximum capacities.


# Oracle environment variables
#export ORACLE_SID=your_oracle_sid
#export ORACLE_HOME=/path/to/your/oracle_home
#export PATH=$ORACLE_HOME/bin:$PATH

# Database credentials
#DB_USER="your_db_user"
#DB_PASS="your_db_password"

# Temporary file to store SQL output
SQL_OUTPUT="/tmp/db_capacity_analysis.txt"

# Predefined maximum capacities
MAX_CPU_UTILIZATION=60 # in percentage
MAX_IOPS=500000 # in IOPS
MAX_QPS=200000 # in QPS
MAX_TPS=200000 # in TPS
MAX_SESSIONS=10000 # in sessions

# Buffer percentage
BUFFER_PERCENTAGE=15

# Function to get current CPU utilization
get_cpu_utilization() {
sar -u 1 5 | grep "Average" | awk '{print $3 + $5}'
}

# Function to get current IOPS
get_iops() {
iostat -dx 1 5 | grep -A 1 "Device" | tail -n 1 | awk '{print $2}'
}

# Function to read the QPS and TPS counters
# (note: these v$sysstat values are cumulative since instance startup, not per-second rates)
get_qps_tps() {
sqlplus -s $DB_USER/$DB_PASS <<EOF > $SQL_OUTPUT
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
SELECT SUM(value) FROM v\$sysstat WHERE name = 'execute count';
SELECT SUM(value) FROM v\$sysstat WHERE name = 'user commits' OR name = 'user rollbacks';
EXIT;
EOF
QPS=$(sed -n '1p' $SQL_OUTPUT)
TPS=$(sed -n '2p' $SQL_OUTPUT)
echo "$QPS $TPS"
}

# Function to get current session count
get_session_count() {
sqlplus -s $DB_USER/$DB_PASS <<EOF > $SQL_OUTPUT
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
SELECT COUNT(*) FROM v\$session WHERE type = 'USER';
EXIT;
EOF
cat $SQL_OUTPUT
}

# Function to get current IOPS from Oracle
get_oracle_iops() {
sqlplus -s $DB_USER/$DB_PASS <<EOF > $SQL_OUTPUT
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
SELECT SUM(value) FROM v\$sysstat WHERE name IN ('physical reads', 'physical writes');
EXIT;
EOF
cat $SQL_OUTPUT
}

# Main function to gather all metrics and calculate capacities
gather_metrics() {
echo "Gathering database capacity metrics..."

# Get current metrics
CURRENT_CPU_UTILIZATION=$(get_cpu_utilization)
CURRENT_IOPS=$(get_iops)
read CURRENT_QPS CURRENT_TPS <<< $(get_qps_tps)
CURRENT_SESSIONS=$(get_session_count)
CURRENT_ORACLE_IOPS=$(get_oracle_iops)

# Calculate available capacities with buffer
BUFFER=$(echo "scale=2; $BUFFER_PERCENTAGE / 100" | bc)
AVAILABLE_CPU_UTILIZATION=$(echo "scale=2; $MAX_CPU_UTILIZATION * (1 - $BUFFER) - $CURRENT_CPU_UTILIZATION" | bc)
AVAILABLE_IOPS=$(echo "scale=2; $MAX_IOPS * (1 - $BUFFER) - $CURRENT_IOPS" | bc)
AVAILABLE_QPS=$(echo "scale=2; $MAX_QPS * (1 - $BUFFER) - $CURRENT_QPS" | bc)
AVAILABLE_TPS=$(echo "scale=2; $MAX_TPS * (1 - $BUFFER) - $CURRENT_TPS" | bc)
AVAILABLE_SESSIONS=$(echo "scale=2; $MAX_SESSIONS * (1 - $BUFFER) - $CURRENT_SESSIONS" | bc)
AVAILABLE_ORACLE_IOPS=$(echo "scale=2; $MAX_IOPS * (1 - $BUFFER) - $CURRENT_ORACLE_IOPS" | bc)

# Print results
echo "Max Capacity for Apps:"
echo "CPU Utilization: $MAX_CPU_UTILIZATION%"
echo "IOPS: $MAX_IOPS"
echo "QPS: $MAX_QPS"
echo "TPS: $MAX_TPS"
echo "Sessions: $MAX_SESSIONS"

echo "Current Capacity of Apps:"
echo "CPU Utilization: $CURRENT_CPU_UTILIZATION%"
echo "IOPS: $CURRENT_IOPS"
echo "QPS: $CURRENT_QPS"
echo "TPS: $CURRENT_TPS"
echo "Sessions: $CURRENT_SESSIONS"

echo "Available Capacity with $BUFFER_PERCENTAGE% Buffer:"
echo "CPU Utilization: $AVAILABLE_CPU_UTILIZATION%"
echo "IOPS: $AVAILABLE_IOPS"
echo "QPS: $AVAILABLE_QPS"
echo "TPS: $AVAILABLE_TPS"
echo "Sessions: $AVAILABLE_SESSIONS"
echo "Oracle IOPS: $AVAILABLE_ORACLE_IOPS"

echo "Metrics gathering complete."
}

# Run the main function
gather_metrics
Explanation:

  • Environment Variables: Set the Oracle environment variables (ORACLE_SID, ORACLE_HOME, and PATH).
  • Database Credentials: Define the database user and password.
  • Temporary File: Create a temporary file to store SQL output.
  • Predefined Maximum Capacities: Define the maximum capacities for CPU utilization, IOPS, QPS, TPS, and sessions.
  • Buffer Percentage: Define the buffer percentage to be used in capacity calculations.
  • Functions:
    • get_cpu_utilization: Uses sar to get current CPU utilization.
    • get_iops: Uses iostat to get current IOPS.
    • get_qps_tps: Uses sqlplus to read the QPS and TPS counters from Oracle (see the rate-sampling sketch below).
    • get_session_count: Uses sqlplus to get the current session count.
    • get_oracle_iops: Uses sqlplus to get current IOPS from Oracle.
  • Main Function: Gathers current metrics, calculates available capacities with the buffer, and prints the results.

Usage:

  1. Save the script to a file, e.g., db_capacity_analysis.sh.
  2. Make the script executable: chmod +x db_capacity_analysis.sh.
  3. Run the script: ./db_capacity_analysis.sh.

Ensure you replace placeholders like your_oracle_sid, /path/to/your/oracle_home, your_db_user, and your_db_password with the actual values for your environment.
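The QPS/TPS figures above are read straight from v$sysstat, whose counters are cumulative since instance startup, so they are totals rather than per-second rates. A minimal rate-sampling sketch (it reuses the same DB_USER/DB_PASS variables; the interval and the 'execute count' statistic are illustrative): sample the counter twice and divide the delta by the interval.

get_qps_rate() {
    local interval=${1:-10}
    local c1 c2
    c1=$(sqlplus -s $DB_USER/$DB_PASS <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
SELECT value FROM v\$sysstat WHERE name = 'execute count';
EXIT;
EOF
)
    sleep "$interval"
    c2=$(sqlplus -s $DB_USER/$DB_PASS <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
SELECT value FROM v\$sysstat WHERE name = 'execute count';
EXIT;
EOF
)
    # Executions per second over the sample window
    echo "scale=2; ($c2 - $c1) / $interval" | bc
}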

Script 1: collect_files.sh

#!/bin/bash

# Define the output file
output_file="file_structure.txt"

# Clear the previous output (if any)
> "$output_file"

# Function to check if a file is a text file (not binary)
is_text_file() {
    # Use the 'file' command to check file type
    file_type=$(file --mime-type -b "$1")
    if [[ "$file_type" == text/* ]]; then
        return 0  # It's a text file
    else
        return 1  # It's a binary file
    fi
}

# Function to resolve absolute path in a portable way
get_absolute_path() {
    # If the system is macOS, resolve the absolute path manually
    if [[ "$(uname)" == "Darwin" ]]; then
        # Resolve path manually for macOS
        dir_name=$(dirname "$1")
        base_name=$(basename "$1")
        full_path=$(cd "$dir_name" && pwd)/"$base_name"
        echo "$full_path"
    else
        # For Linux or other systems, use realpath or readlink
        realpath "$1" 2>/dev/null || readlink -f "$1"
    fi
}

# Function to traverse directories and collect file info
collect_files() {
    for file in "$1"/*; do
        if [ -d "$file" ]; then
            # It's a directory, recursively call the function
            collect_files "$file"
        elif [ -f "$file" ]; then
            # It's a file, check if it's a text file
            if is_text_file "$file"; then
                # It's a text file, collect its details
                file_name=$(basename "$file")
                # Resolve absolute path in a portable way
                file_path=$(get_absolute_path "$file")
                file_content=$(cat "$file")
                echo "File: $file_name" >> "$output_file"
                echo "Path: $file_path" >> "$output_file"
                echo "Content:" >> "$output_file"
                echo "$file_content" >> "$output_file"
                echo "----------------------" >> "$output_file"
            else
                echo "Skipping binary file: $file" >> "$output_file"
            fi
        fi
    done
}

# Start from the current directory
collect_files "."

echo "File structure saved to $output_file."
 

 

 

 Script 2: rebuild_files.sh

#!/bin/bash

# Define the input file (file structure)
input_file="file_structure.txt"

# Check if the input file exists
if [ ! -f "$input_file" ]; then
    echo "Error: File structure file not found!"
    exit 1
fi

# Function to rebuild the file structure
rebuild_files() {
    while IFS= read -r line; do
        if [[ "$line" =~ ^File:\ (.*) ]]; then
            file_name="${BASH_REMATCH[1]}"
        elif [[ "$line" =~ ^Path:\ (.*) ]]; then
            file_path="${BASH_REMATCH[1]}"
        elif [[ "$line" =~ ^Content:$ ]]; then
            # Start reading file content
            content=""
            while IFS= read -r content_line && [[ ! "$content_line" =~ ^---------------------- ]]; do
                content="$content$content_line"$'\n'
            done
            # Create directory if it doesn't exist
            dir_name=$(dirname "$file_path")
            mkdir -p "$dir_name"
            # Create the file and write content
            echo -n "$content" > "$file_path"
            echo "Recreated file: $file_path"
        fi
    done < "$input_file"
}

# Rebuild files and directories
rebuild_files

echo "File structure rebuilt from $input_file."
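A possible usage sequence (the host name and paths are placeholders, and both scripts are assumed to be copied to the target host); note that rebuild_files.sh recreates each file at the absolute path recorded in file_structure.txt.

./collect_files.sh                              # walk the current directory, write file_structure.txt
scp file_structure.txt otherhost:/tmp/          # copy the manifest to the target host
ssh otherhost 'cd /tmp && ./rebuild_files.sh'   # recreate each captured text file at its recorded path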
 

 

===

Below are the steps to create a test schema with tables, and a script to insert unique rows into the table every minute.

Step 1: Create a Test Schema and Tables

First, connect to your Oracle database using SQL*Plus or any other database client and execute the following SQL commands to create a schema and tables:

-- Create a new user/schema
CREATE USER test_schema IDENTIFIED BY password;

-- Grant necessary privileges to the new user
GRANT CONNECT, RESOURCE TO test_schema;
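-- Note: from Oracle 12c onward the RESOURCE role no longer includes UNLIMITED TABLESPACE,
-- so grant a quota explicitly if later inserts fail with ORA-01950 (the USERS tablespace
-- below is only an example):
-- ALTER USER test_schema QUOTA UNLIMITED ON users;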

-- Switch the current schema for this session (or connect directly as test_schema instead)
ALTER SESSION SET CURRENT_SCHEMA = test_schema;

-- Create a table with a primary key
CREATE TABLE test_table1 (
    id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    data VARCHAR2(100)
);

-- Create another table with a primary key
CREATE TABLE test_table2 (
    id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    description VARCHAR2(100)
);

Step 2: Create a Script to Insert Unique Rows

Next, create a shell script that will insert unique rows into the tables every minute. Save the following script as insert_rows.sh:

#!/bin/bash

# Oracle database credentials
DB_USER="test_schema"
DB_PASS="password"
DB_SID="ORCL"

# Function to insert unique rows
insert_rows() {
    sqlplus -s $DB_USER/$DB_PASS@$DB_SID <<EOF
    WHENEVER SQLERROR EXIT SQL.SQLCODE;
    INSERT INTO test_table1 (data) VALUES ('Data ' || TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
    INSERT INTO test_table2 (description) VALUES ('Description ' || TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
    COMMIT;
    EXIT;
EOF
}

# Infinite loop to insert rows every minute
while true; do
    insert_rows
    sleep 60
done

Step 3: Make the Script Executable

Make the script executable by running the following command:

chmod +x insert_rows.sh

Step 4: Run the Script in Background

You can run the script in the background using nohup so that it continues to run even after you log out:

nohup ./insert_rows.sh &
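To check on the background loop later, or to stop it once testing is done (assuming you kept the insert_rows.sh file name):

ps -ef | grep '[i]nsert_rows.sh'   # confirm the loop is still running
pkill -f insert_rows.sh            # stop it when the test is finished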

Step 5: Verify the Inserts

You can verify that the rows are being inserted by querying the tables:

SELECT * FROM test_table1;
SELECT * FROM test_table2;

This setup will ensure that unique rows are inserted into the tables every minute. Adjust the script and SQL commands as needed for your specific environment and requirements.

 

Two servers have been identified for configuring replication. The necessary binaries have been set up, and Oracle GoldenGate is configured and installed. Local storage with sufficient capacity for testing purposes has been identified and should be used.

Before setting up the extract and replicate processes, create a test schema in both Database 1 and Database 2, and include a few tables in each schema.

Once the replication setup is complete, verify that any data inserted into the tables in Database 1 is successfully replicated to the corresponding schemas in Database 2.
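A quick way to check the replication processes on each server is GGSCI; the GoldenGate home path and the group names (EXT1, REP1) below are placeholders for your own configuration:

cd /path/to/goldengate_home
./ggsci <<EOF
INFO ALL
INFO EXTRACT EXT1, DETAIL
INFO REPLICAT REP1, DETAIL
EOF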

 

SGA usage (used, free, and total, in MB) from v$sgastat:

select round(used.bytes / 1024 / 1024, 2) used_mb,
       round(free.bytes / 1024 / 1024, 2) free_mb,
       round(tot.bytes  / 1024 / 1024, 2) total_mb
from (select sum(bytes) bytes from v$sgastat where name != 'free memory') used,
     (select sum(bytes) bytes from v$sgastat where name = 'free memory') free,
     (select sum(bytes) bytes from v$sgastat) tot;


#!/usr/bin/perl
########
# Run against the LIVE database to duplicate its service definitions
########
use Getopt::Std;
getopts('d:s:t:',\%opts);
$DB=$opts{d}||die "Please Enter Database with -d option\n";
$TDB=$opts{t}||die "Please Enter TARGET Database with -t option\n";
$Svc=$opts{s}||warn "Please Enter (Optional) service with -s option\n";
$Cmd="srvctl config service -d $DB";
$Cmd.=" -s $Svc" if defined($opts{s});
print "\t-> Executing $Cmd\n";
@AllSvcs=`$Cmd`;
$PrevSrv="";
$TmpCmd="";
for $i (@AllSvcs) {
    $Tmp = (split(':', $i))[-1];
    chomp $Tmp;
    $Srv = $Tmp if $i =~ /^Service name:/;
    if ($PrevSrv ne $Srv) {
        print "$TmpCmd\n" if $TmpCmd ne "";
        $TmpCmd = " srvctl add service -d $TDB";
    }
    $TmpCmd .= " -s $Tmp" if $i =~ /^Service name:/;
    $TmpCmd .= " -a $Tmp" if $i =~ /^Available instances:\s*\S+$/;
    $TmpCmd .= " -r $Tmp" if $i =~ /^Preferred instances:/;
    $TmpCmd .= " -P $Tmp" if $i =~ /^TAF policy specification:/;
    $TmpCmd .= " -y $Tmp" if $i =~ /^Management policy:/;
    $TmpCmd .= " -l $Tmp" if $i =~ /^Service role:/;
    $PrevSrv = $Srv;
}
print "$TmpCmd\n";

#duplicate services#
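Example invocation (dup_services.pl is simply the name I save the script under; database and service names are placeholders). The output is a set of srvctl add service commands for the target database, which can be reviewed and then executed.

./dup_services.pl -d SRCDB -t TGTDB            # duplicate all services
./dup_services.pl -d SRCDB -t TGTDB -s MYSVC   # duplicate a single service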


Testing

Let's break down the SQL queries in the PL/SQL code (the full functions appear later in this post) into smaller parts for testing. We'll focus on the collectSGAStatsNoDelta function and the collectCustomStats function separately.

Breakdown for collectSGAStatsNoDelta

  1. Test the v$sgainfo table selection:

    SELECT *
    FROM v$sgainfo
    WHERE name NOT IN ('Maximum SGA Size', 'Startup overhead in Shared Pool', 'Granule Size');
    
    • This will help you verify that the v$sgainfo table contains the expected data and the WHERE clause is filtering correctly.
  2. Test the v$pgastat table selection:

    SELECT *
    FROM v$pgastat;
    
    • This will help you verify that the v$pgastat table contains the expected data.
  3. Test the v$sgastat table selection:

    SELECT *
    FROM v$sgastat
    WHERE name = 'free memory';
    
    • This will help you verify that the v$sgastat table contains the expected data and the WHERE clause is filtering correctly.
  4. Test the inner query combining v$sgainfo, v$pgastat, and v$sgastat:

    SELECT NVL(&CONIDNAME, 0) con_id, name, ORA_HASH(name) namehash, bytes / 1024 / 1024 size_mb,
           SUM(bytes / 1024 / 1024) OVER () sga_size_mb
    FROM v$sgainfo
    WHERE name NOT IN ('Maximum SGA Size', 'Startup overhead in Shared Pool', 'Granule Size')
    UNION ALL
    SELECT NVL(&CONIDNAME, 0) con_id, name, ORA_HASH(name) namehash,
           DECODE(NVL(unit, 'x'), 'bytes', value / 1024 / 1024, value) value,
           NULL sga_size_mb
    FROM v$pgastat
    UNION ALL
    SELECT NVL(&CONIDNAME, 0) con_id, pool || ':' || name, ORA_HASH(pool || ':' || name) namehash,
           bytes / 1024 / 1024 size_mb,
           NULL sga_size_mb
    FROM v$sgastat
    WHERE name = 'free memory';
    
    • This will help you verify that the combined query is working correctly.
  5. Test the outer query that calculates additional statistics:

    SELECT con_id, name, namehash,
           ROUND(size_mb, 2) pool_size_mb,
           ROUND(sga_size_mb / 1024, 2) sga_size_gb,
           ROUND((size_mb * 100) / sga_size_mb, 2) per_sga
    FROM (
        SELECT NVL(&CONIDNAME, 0) con_id, name, ORA_HASH(name) namehash, bytes / 1024 / 1024 size_mb,
               SUM(bytes / 1024 / 1024) OVER () sga_size_mb
        FROM v$sgainfo
        WHERE name NOT IN ('Maximum SGA Size', 'Startup overhead in Shared Pool', 'Granule Size')
        UNION ALL
        SELECT NVL(&CONIDNAME, 0) con_id, name, ORA_HASH(name) namehash,
               DECODE(NVL(unit, 'x'), 'bytes', value / 1024 / 1024, value) value,
               NULL sga_size_mb
        FROM v$pgastat
        UNION ALL
        SELECT NVL(&CONIDNAME, 0) con_id, pool || ':' || name, ORA_HASH(pool || ':' || name) namehash,
               bytes / 1024 / 1024 size_mb,
               NULL sga_size_mb
        FROM v$sgastat
        WHERE name = 'free memory'
    );
    
    • This will help you verify that the calculations are correct.
  6. Test the final sga_stats CTE:

    WITH sga_stats AS (
        SELECT NULL service_name, con_id, p_stat_type, namehash id, namehash, name, 1 seq_no, ROUND(pool_size_mb, 2) val1, 'Pool size (mb)' val1_name,
               ROUND(sga_size_gb, 2) val2, 'sga size (gb)' val2_name,
               ROUND(per_sga, 2) val3, '% sga' val3_name,
               NULL delta_flag, NULL val4, NULL val4_name
        FROM (
            SELECT con_id, name, namehash,
                   ROUND(size_mb, 2) pool_size_mb,
                   ROUND(sga_size_mb / 1024, 2) sga_size_gb,
                   ROUND((size_mb * 100) / sga_size_mb, 2) per_sga
            FROM (
                SELECT NVL(&CONIDNAME, 0) con_id, name, ORA_HASH(name) namehash, bytes / 1024 / 1024 size_mb,
                       SUM(bytes / 1024 / 1024) OVER () sga_size_mb
                FROM v$sgainfo
                WHERE name NOT IN ('Maximum SGA Size', 'Startup overhead in Shared Pool', 'Granule Size')
                UNION ALL
                SELECT NVL(&CONIDNAME, 0) con_id, name, ORA_HASH(name) namehash,
                       DECODE(NVL(unit, 'x'), 'bytes', value / 1024 / 1024, value) value,
                       NULL sga_size_mb
                FROM v$pgastat
                UNION ALL
                SELECT NVL(&CONIDNAME, 0) con_id, pool || ':' || name, ORA_HASH(pool || ':' || name) namehash,
                       bytes / 1024 / 1024 size_mb,
                       NULL sga_size_mb
                FROM v$sgastat
                WHERE name = 'free memory'
            )
        )
    )
    SELECT *
    FROM sga_stats;
    
    • This will help you verify that the sga_stats CTE is working correctly. (To run it standalone, substitute a literal value such as 'A' for the PL/SQL variable p_stat_type.)

Breakdown for collectCustomStats

  1. Test the x$kghlu table selection:

    SELECT NVL(SUM(kghlufsh), 0), NVL(SUM(kghlunfu), 0)
    FROM x$kghlu;
    
    • This will help you verify that the x$kghlu table contains the expected data.
  2. Test the v$dlm_misc table selection:

    SELECT SUM(value)
    FROM v$dlm_misc
    WHERE name IN ('gcs msgs received', 'ges msgs received');
    
    • This will help you verify that the v$dlm_misc table contains the expected data and the WHERE clause is filtering correctly.
  3. Test the v$system_event table selection:

    SELECT &CONIDNAME con_id, NVL(SUM(DECODE(wait_class, 'Cluster', TOTAL_WAITS, 0)), 0) clu_waits, NVL(SUM(DECODE(wait_class, 'Cluster', TIME_WAITED_MICRO, 0)), 0) / 1000 clu_time,
           NVL(SUM(DECODE(event, 'Failed Logon Delay', TOTAL_WAITS, 0)), 0) failed_logon
    FROM v$&CONPREFIX.system_event
    WHERE (v_dbmetadata.cluster_db = 'TRUE' AND WAIT_CLASS = 'Cluster')
       OR event = 'Failed Logon Delay'
    GROUP BY &CONIDNAME;
    
    • This will help you verify that the v$system_event table contains the expected data and the WHERE clause is filtering correctly.
  4. Test the custom_stats CTE:

    WITH custom_stats AS (
        SELECT NULL service_name, 0 con_id, p_stat_type, 2 id, 2 id1, 'Shared Pool Flush' name, 1 seq_no, l_sp_flush val1, 'Value' val1_name, NULL val2, NULL val2_name,
               NULL val3, NULL val3_name, NULL deltaFlag, NULL val4, NULL val4_name
        FROM dual
        UNION ALL
        SELECT NULL service_name, 0 con_id, p_stat_type, 3 id, 3 id1, 'Shared Pool 4031 Error count' name, 1 seq_no, l_sp_4031 val1, 'Value' val1_name, NULL val2,
               NULL val2_name, NULL val3, NULL val3_name, NULL deltaFlag, NULL val4, NULL val4_name
        FROM dual
        UNION ALL
        SELECT NULL service_name, con_id, p_stat_type, 4 id, 4 id1, 'Cluster Waits' name, 1 seq_no, clu_waits val1, 'Total Wait count' val1_name, clu_time val2,
               'Total Wait Time' val2_name, failed_logon val3, 'Failed Logon Delay' val3_name, NULL deltaFlag, NULL val4, NULL val4_name
        FROM (
            SELECT &CONIDNAME con_id, NVL(SUM(DECODE(wait_class, 'Cluster', TOTAL_WAITS, 0)), 0) clu_waits, NVL(SUM(DECODE(wait_class, 'Cluster', TIME_WAITED_MICRO, 0)), 0) / 1000 clu_time,
                   NVL(SUM(DECODE(event, 'Failed Logon Delay', TOTAL_WAITS, 0)), 0) failed_logon
            FROM v$&CONPREFIX.system_event
            WHERE (v_dbmetadata.cluster_db = 'TRUE' AND WAIT_CLASS = 'Cluster')
               OR event = 'Failed Logon Delay'
            GROUP BY &CONIDNAME
        )
        UNION ALL
        SELECT NULL service_name, 0 con_id, p_stat_type, 5 id, 5 id1, 'Cluster Msg Received' name, 1 seq_no, l_msg_recv val1, 'GCS/GES Msg Received' val1_name, NULL val2,
               NULL val2_name, NULL val3, NULL val3_name, NULL deltaFlag, NULL val4, NULL val4_name
        FROM dual
        WHERE v_dbmetadata.cluster_db = 'TRUE'
    )
    SELECT *
    FROM custom_stats;
    
    • This will help you verify that the custom_stats CTE is working correctly. (To run it standalone, substitute literal values for the PL/SQL variables p_stat_type, l_sp_flush, l_sp_4031, l_msg_recv, and v_dbmetadata.cluster_db.)

By breaking down the queries into smaller parts, you can test each component individually to ensure that each part works correctly before combining them into the final query.

== End of test

Next, let's integrate the collectCustomStats and collectSGAStatsNoDelta functions into a shell script. The shell script calls these PL/SQL functions, collects the statistics, and takes action based on the collected data.

Step 1: Create the PL/SQL Functions

First, ensure that the collectCustomStats and collectSGAStatsNoDelta functions are created in your Oracle database. You can create them as follows:

CREATE OR REPLACE FUNCTION collectSGAStatsNoDelta(
    p_stat_type IN VARCHAR2,
    p_n_stats OUT SYS.ODCIVARCHAR2LIST
) RETURN DATE IS
    l_action VARCHAR2(60) := 'collectSGAStatsNoDelta';
    l_end_date DATE;
BEGIN
    -- Set session info (customize as needed)
    -- set_session_info('A', l_action);

    -- Collect SGA statistics without delta
    WITH sga_stats AS (
        SELECT NULL service_name, con_id, p_stat_type, namehash id, namehash, name, 1 seq_no, ROUND(pool_size_mb, 2) val1, 'Pool size (mb)' val1_name,
               ROUND(sga_size_gb, 2) val2, 'sga size (gb)' val2_name,
               ROUND(per_sga, 2) val3, '% sga' val3_name,
               NULL delta_flag, NULL val4, NULL val4_name
        FROM (
            SELECT con_id, name, namehash,
                   ROUND(size_mb, 2) pool_size_mb,
                   ROUND(sga_size_mb / 1024, 2) sga_size_gb,
                   ROUND((size_mb * 100) / sga_size_mb, 2) per_sga
            FROM (
                SELECT NVL(&CONIDNAME, 0) con_id, name, ORA_HASH(name) namehash, bytes / 1024 / 1024 size_mb,
                       SUM(bytes / 1024 / 1024) OVER () sga_size_mb
                FROM v$sgainfo
                WHERE name NOT IN ('Maximum SGA Size', 'Startup overhead in Shared Pool', 'Granule Size')
            )
            UNION ALL
            SELECT NVL(&CONIDNAME, 0) con_id, name, ORA_HASH(name) namehash,
                   DECODE(NVL(unit, 'x'), 'bytes', value / 1024 / 1024, value) value,
                   NULL, NULL
            FROM v$pgastat
            UNION ALL
            SELECT NVL(&CONIDNAME, 0) con_id, pool || ':' || name, ORA_HASH(pool || ':' || name) namehash,
                   bytes / 1024 / 1024 size_mb,
                   NULL, NULL
            FROM v$sgastat
            WHERE name = 'free memory'
        )
    )
    SELECT *
    BULK COLLECT INTO p_n_stats
    FROM sga_stats;

    -- Set session info (customize as needed)
    -- set_session_info('A', l_action);

    l_end_date := SYSDATE;
    RETURN l_end_date;
END collectSGAStatsNoDelta;
/

CREATE OR REPLACE FUNCTION collectCustomStats(
    p_stat_type IN VARCHAR2,
    p_n_stats OUT SYS.ODCIVARCHAR2LIST
) RETURN DATE IS
    l_action VARCHAR2(60) := 'collectCustomStats';
    l_end_date DATE;
    l_sp_flush NUMBER;
    l_sp_4031 NUMBER;
    l_clu_wait NUMBER;
    l_clu_time NUMBER;
    l_msg_recv NUMBER;
BEGIN
    -- Set session info (customize as needed)
    -- set_session_info('A', l_action);

    -- Collect shared pool statistics
    SELECT NVL(SUM(kghlufsh), 0), NVL(SUM(kghlunfu), 0)
    INTO l_sp_flush, l_sp_4031
    FROM x$kghlu;

    -- Collect cluster statistics if applicable
    IF v_dbmetadata.cluster_db = 'TRUE' THEN
        SELECT SUM(value)
        INTO l_msg_recv
        FROM v$dlm_misc
        WHERE name IN ('gcs msgs received', 'ges msgs received');
    END IF;

    -- Collect custom statistics
    WITH custom_stats AS (
        SELECT NULL service_name, 0 con_id, p_stat_type, 2 id, 2 id1, 'Shared Pool Flush' name, 1 seq_no, l_sp_flush val1, 'Value' val1_name, NULL val2, NULL val2_name,
               NULL val3, NULL val3_name, NULL deltaFlag, NULL val4, NULL val4_name
        FROM dual
        UNION ALL
        SELECT NULL service_name, 0 con_id, p_stat_type, 3 id, 3 id1, 'Shared Pool 4031 Error count' name, 1 seq_no, l_sp_4031 val1, 'Value' val1_name, NULL val2,
               NULL val2_name, NULL val3, NULL val3_name, NULL deltaFlag, NULL val4, NULL val4_name
        FROM dual
        UNION ALL
        SELECT NULL service_name, con_id, p_stat_type, 4 id, 4 id1, 'Cluster Waits' name, 1 seq_no, clu_waits val1, 'Total Wait count' val1_name, clu_time val2,
               'Total Wait Time' val2_name, failed_logon val3, 'Failed Logon Delay' val3_name, NULL deltaFlag, NULL val4, NULL val4_name
        FROM (
            SELECT &CONIDNAME con_id, NVL(SUM(DECODE(wait_class, 'Cluster', TOTAL_WAITS, 0)), 0) clu_waits, NVL(SUM(DECODE(wait_class, 'Cluster', TIME_WAITED_MICRO, 0)), 0) / 1000 clu_time,
                   NVL(SUM(DECODE(event, 'Failed Logon Delay', TOTAL_WAITS, 0)), 0) failed_logon
            FROM v$&CONPREFIX.system_event
            WHERE (v_dbmetadata.cluster_db = 'TRUE' AND WAIT_CLASS = 'Cluster')
               OR event = 'Failed Logon Delay'
            GROUP BY &CONIDNAME
        )
        UNION ALL
        SELECT NULL service_name, 0 con_id, p_stat_type, 5 id, 5 id1, 'Cluster Msg Received' name, 1 seq_no, l_msg_recv val1, 'GCS/GES Msg Received' val1_name, NULL val2,
               NULL val2_name, NULL val3, NULL val3_name, NULL deltaFlag, NULL val4, NULL val4_name
        FROM dual
        WHERE v_dbmetadata.cluster_db = 'TRUE'
    )
    SELECT *
    BULK COLLECT INTO p_n_stats
    FROM custom_stats;

    -- Collect SGA statistics without delta
    l_end_date := collectSGAStatsNoDelta(p_stat_type, p_n_stats);

    RETURN l_end_date;
END;
/

Step 2: Create the Shell Script

Next, create a shell script that calls these PL/SQL functions, collects the statistics, and takes appropriate actions based on the collected data.

#!/bin/bash

# Define Oracle environment variables
ORACLE_HOME="/path/to/oracle_home"
ORACLE_SID="your_sid"
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_HOME ORACLE_SID PATH

# Define the email address to send alerts to
EMAIL="your_email@example.com"

# Define the threshold for shared pool flushes and ORA-04031 errors
SHARED_POOL_FLUSH_THRESHOLD=100
ORA_04031_THRESHOLD=10

# Run the PL/SQL function and capture the output
sqlplus -s / as sysdba <<EOF > /tmp/custom_stats.txt
SET SERVEROUTPUT ON
DECLARE
    p_n_stats SYS.ODCIVARCHAR2LIST;
    l_end_date DATE;
BEGIN
    l_end_date := collectCustomStats('A', p_n_stats);
    FOR i IN 1 .. p_n_stats.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE(p_n_stats(i));
    END LOOP;
END;
/
EOF

# Parse the output and check the statistics
SHARED_POOL_FLUSH=$(grep 'Shared Pool Flush' /tmp/custom_stats.txt | awk '{print $NF}')
ORA_04031_COUNT=$(grep 'Shared Pool 4031 Error count' /tmp/custom_stats.txt | awk '{print $NF}')

# Check if the statistics exceed the thresholds
if [ "$SHARED_POOL_FLUSH" -gt "$SHARED_POOL_FLUSH_THRESHOLD" ] || [ "$ORA_04031_COUNT" -gt "$ORA_04031_THRESHOLD" ]; then
    # Send an alert email
    mail -s "Custom Stats Alert" "$EMAIL" < /tmp/custom_stats.txt

    # Take necessary actions to avoid ORA-04031
    # For example, flush the shared pool
    echo "ALTER SYSTEM FLUSH SHARED_POOL;" | sqlplus -s / as sysdba
fi

Step 3: Schedule the Shell Script Using Cron

To run the script at regular intervals, add a cron job. Edit the crontab file using crontab -e and add an entry like this to run the script every 5 minutes:

*/5 * * * * /path/to/your/monitor_custom_stats.sh

Explanation

  1. Oracle Environment Variables:

    • Set the ORACLE_HOME and ORACLE_SID environment variables to point to your Oracle installation and database SID.
  2. Email Address:

    • Define the email address to send alerts to.
  3. Thresholds:

    • Define the thresholds for shared pool flushes and ORA-04031 errors.
  4. PL/SQL Function Call:

    • The script calls the collectCustomStats function using sqlplus and captures the output in /tmp/custom_stats.txt.
  5. Parse the Output:

    • The script parses the output to extract the shared pool flush count and ORA-04031 error count.
  6. Check Thresholds:

    • The script checks if the statistics exceed the defined thresholds. If they do, it sends an alert email and takes necessary actions (e.g., flushing the shared pool).
  7. Schedule with Cron:

    • The script is scheduled to run every 5 minutes using a cron job.

This script will monitor the custom statistics collected by the collectCustomStats function, check if the statistics exceed the defined thresholds, and take appropriate actions such as sending an alert email and flushing the shared pool to avoid ORA-04031 errors. You can customize the thresholds and actions as needed to suit your environment.


#!/bin/bash


# Load environment variables
source ~/.profile

# Check for correct number of arguments
if [ $# -ne 2 ]; then
echo "Usage: $0 <Threshold> <SID>"
exit 1
fi

# Set variables
pct_alert=$1
ORACLE_SID=$2
export ORACLE_SID

# Determine ORATAB location
ORATAB="/etc/oratab"
[ -f /var/opt/oracle/oratab ] && ORATAB="/var/opt/oracle/oratab"

# Get ORACLE_HOME from ORATAB
ORACLE_HOME=$(grep "^$ORACLE_SID:" $ORATAB | awk -F":" '{print $2}')
export ORACLE_HOME

# Define log file paths
logbase="/u01/home/oracle/logs"
tmpoutfile="$logbase/softlimit_alert.tmp"
tmpoutfile2="$logbase/processlimit_alert.tmp"
outfile="$logbase/softlimit_alert.out"
detaillog="$logbase/softlimit_alert_det.out"
htmlout="$logbase/softlimit_alert.html"
emaillog="$logbase/softlimit_alert1.out"
mailto="alerts@jilebi.com"

# Clean up old log files
rm -f $outfile $emaillog $detaillog $htmlout

# Define session termination log file
Sess_terminate_logfile="$logbase/Terminate_session_$(basename "$0").$(hostname -s).$ORACLE_SID.$(date '+%y%m').log"
echo "================== $(date '+%F %T') ====================" >> $Sess_terminate_logfile

# Main SQL execution
sqlplus -S /nolog <<EOF
conn / as sysdba

spool $Sess_terminate_logfile append
set lines 300
col PROGRAM format a30
col osuser format a20
col machine format a25
col CON_NAME format a20
select decode((select count(*) from dba_objects where object_name='V\$CONTAINERS'),0,'Not Available',sys_context('USERENV','CON_NAME')) CON_NAME,
sid,serial#,username,OSUSER,MACHINE,PROGRAM,LOGON_TIME,trunc(LAST_CALL_ET/60) last_call_mins,status
from v\$session
where username in ('DIABLO','VANAD') and status='INACTIVE' and last_call_et/60/60>2;

set serverout on
declare
begin
for i in (select sid,serial#,username,OSUSER,MACHINE,PROGRAM,LOGON_TIME,trunc(LAST_CALL_ET/60) last_call,status
from v\$session
where username in ('DIABLO','VANAD') and status='INACTIVE' and last_call_et/60/60>2)
loop
dbms_output.put_line('Terminating session -->'||i.sid||','||i.serial#||'......');
execute immediate ('alter system kill session '''||i.sid||','||i.serial#||''' immediate');
end loop;
end;
/
spool off

set lines 180 feed off pagesize 0 head off termout off term off
col USERNAME for a32
col PDB_NAME format a20
col profile format a50
alter session disable parallel query;

spool $tmpoutfile
select /*+ noparallel */ con_id, PDB_NAME, username, profile, connection_limit, number_of_sessions, pct_soft
from (
select vc.con_id, vc.name PDB_NAME, username, profile, connection_limit, number_of_sessions,
round(100 * number_of_sessions/connection_limit) pct_soft
from (
select a.con_id, a.username, profile, least(b.value, c.limits) connection_limit, number_of_sessions
from (
select username, con_id, count(*) number_of_sessions
from v\$session
where con_id > 0 and username is not null
group by username, con_id
) a,
(select to_number(value) value from v\$parameter where name='processes') b,
(select u.con_id, username, p.profile, to_number(decode(limit, 'UNLIMITED', 65535, limit)) limits
from cdb_users u, cdb_profiles p
where limit != 'DEFAULT' and u.profile = p.profile and resource_name = 'SESSIONS_PER_USER' and account_status = 'OPEN'
and u.con_id = p.con_id
order by u.con_id, username
) c
where a.con_id = c.con_id and a.username = c.username
) data, v\$containers vc
where data.con_id = vc.con_id
) final;
spool off

col RESOURCE_NAME for a32
spool $tmpoutfile2
select RESOURCE_NAME, CURRENT_UTILIZATION, LIMIT_VALUE, round(to_number(CURRENT_UTILIZATION) * 100 / to_number(LIMIT_VALUE)) percent
from v\$resource_limit
where RESOURCE_NAME in ('processes', 'sessions');
spool off
EOF

# Exit if the database is not open (ORA-01219: queries allowed on fixed tables or views only)
if grep -q 'ORA-01219' $tmpoutfile; then
exit 0
fi

# Check session profile and processes
grep -v PCT_SOFT $tmpoutfile | awk -v pctalert=$pct_alert '{ if ($7 > pctalert) print $0 }' > $outfile
grep -v PERCENT $tmpoutfile2 | awk -v pctalert=$pct_alert '{ if ($4 > pctalert) print $0 }' >> $outfile

if [ -s $outfile ]; then
# For each user that has a high number of connections
grep -v PCT_SOFT $tmpoutfile | awk -v pctalert=$pct_alert '{ if ($7 > pctalert) print $0 }' | grep -iv pct_soft | grep -iv ^-| grep -iv ^$ | awk '{print $2,$3,$4}' | uniq | while read line; do
pdb=$(echo $line | awk '{print $1}')
username=$(echo $line | awk '{print $2}')
profile=$(echo $line | awk '{print $3}')
echo $pdb, $username, $profile

sqlplus -S /nolog <<EOF
conn / as sysdba
set verify off pages 10000 feedback on define off
col PDB_NAME format a20
col profile format a30
alter session disable parallel query;
spool $htmlout append
PROMPT Container, Instance and Module Breakdown for user $username --- profile $profile
PROMPT
set lines 200 feedback off
col PDB_NAME format a20
col module format a32
col number_of_sessions format 999,999

select /*+ noparallel */ c.con_id, c.name, s.username, count(*) number_of_sessions
from v\$Session s, v\$containers c
where s.username = '${username}' and s.con_id = c.con_id
group by c.con_id, c.name, s.username;

select /*+ noparallel */ c.con_id, c.name, s.username, s.module, count(*) number_of_sessions
from v\$Session s, v\$containers c
where s.username = '${username}' and s.con_id = c.con_id
group by c.con_id, c.name, s.username, s.module;
spool off
EOF
done

echo "
=============================================
All connections on the Oracle RAC
=============================================" > $detaillog

sqlplus -S /nolog <<EOF
connect / as sysdba
alter session disable parallel query;
spool $detaillog append
col module for a60
col pdb_name format a20
col profile format a30
col username format a32
col module format a30
col number_of_sessions format 999,999
set lines 200 feedback off
select /*+ noparallel */ con_id, PDB_NAME, username, profile, connection_limit, number_of_sessions, pct_soft
from (
select vc.con_id, vc.name PDB_NAME, username, profile, connection_limit, number_of_sessions,
round(100 * number_of_sessions/connection_limit) pct_soft
from (
select a.con_id, a.username, profile, least(b.value, c.limits) connection_limit, number_of_sessions
from (
select username, con_id, count(*) number_of_sessions
from v\$session
where con_id > 0 and username is not null
group by username, con_id
) a,
(select to_number(value) value from v\$parameter where name='processes') b,
(select u.con_id, username, p.profile, to_number(decode(limit, 'UNLIMITED', 65535, limit)) limits
from cdb_users u, cdb_profiles p
where limit != 'DEFAULT' and u.profile = p.profile and resource_name = 'SESSIONS_PER_USER' and account_status = 'OPEN'
and u.con_id = p.con_id
order by u.con_id, username
) c
where a.con_id = c.con_id and a.username = c.username
) data, v\$containers vc
where data.con_id = vc.con_id
) final order by pct_soft desc;

select p.process_hardlimit, number_of_sessions
from (
select count(*) number_of_sessions from v\$Session
) b,
(select to_number(value) process_hardlimit from v\$parameter where name = 'processes') p;

col RESOURCE_NAME for a32
select RESOURCE_NAME, CURRENT_UTILIZATION, LIMIT_VALUE, round(to_number(CURRENT_UTILIZATION) * 100 / to_number(LIMIT_VALUE)) percent
from v\$resource_limit
where RESOURCE_NAME in ('processes', 'sessions');
spool off
EOF

echo "To: $mailto
Subject: ALERT: $(hostname -s):$ORACLE_SID softlimit utilization threshold breached
Content-Type: text/html; charset='us-ascii'
<html>
<body>
<p/>
<font face='Courier' color='blue'>
<pre>
Please check the following users for softlimit utilization ( > ${pct_alert}%)

o Escalate to L2 oncall
o L2 - Verify there are no jdbc connections and then stop service on problematic node
o L2 - Use ./kill_sess.sh script to identify and kill inactive connections

" >> $emaillog
echo "Hostname : $(hostname -s)" >> $emaillog
echo "Instance : $ORACLE_SID" >> $emaillog
echo "Users nearing softlimit (${pct_alert}%)
" >> $emaillog

cat $detaillog >> $emaillog
cat $htmlout >> $emaillog
echo "</pre></font>
</body>
</html>" >> $emaillog

cat $emaillog | sendmail -t

echo "Details written into $logfile_alert"

echo "std_subject => $(hostname -s):$ORACLE_SID connection (Session, Processes, SESSIONS_PER_USER) limit threshold breached" >> $logfile_alert
echo "std_incident_type => ALERT" >> $logfile_alert
cat $detaillog >> $logfile_alert
cat $htmlout >> $logfile_alert
fi


############################################################################################################
############################################################################################################
############################################################################################################
To check that the above script works as intended, you can simulate the condition it monitors by generating a large number of inactive sessions for a test user.

Below is a test setup that creates a user with a low SESSIONS_PER_USER limit and then opens many idle (INACTIVE) sessions against the database:

-- Create a profile with a low per-user session limit and a test user (run as SYSDBA)
CREATE PROFILE test_profile LIMIT SESSIONS_PER_USER 120;
CREATE USER test_user IDENTIFIED BY test_password PROFILE test_profile;
GRANT CREATE SESSION TO test_user;
-- (in a CDB, create the user inside a PDB and connect to that PDB's service below)

-- Note: looping over DBMS_LOCK.SLEEP inside one PL/SQL block keeps a single session
-- ACTIVE (waiting on "PL/SQL lock timer"); it does not create many INACTIVE sessions.
-- Open the idle sessions from the shell instead:

#!/bin/bash
# create_inactive_sessions.sh - open 100 sqlplus sessions that stay INACTIVE for an hour
# (assumes ORACLE_SID is set in the environment for a local bequeath connection)
for i in {1..100}; do
sqlplus -s test_user/test_password <<SQL &
host sleep 3600
exit
SQL
done
wait


Steps to Simulate the Scenario:
Connect to the Oracle Database as SYSDBA and create the test profile and user:

sqlplus / as sysdba

Run the shell loop above to open the idle sessions:

./create_inactive_sessions.sh

Verify the inactive sessions:

SELECT username, status, COUNT(*) AS session_count
FROM v$session
WHERE username = 'TEST_USER'
GROUP BY username, status;

Run the soft limit monitoring script:

./softlimit_monitor.sh 80 ORCL
Explanation:
Create a Profile and User for Testing:

A profile test_profile with SESSIONS_PER_USER set to 120 is created and assigned to test_user, so the monitoring query picks the user up (it ignores users whose profile limit is DEFAULT).
The user only needs the CREATE SESSION privilege.

Open the Inactive Sessions:

The shell loop opens 100 background sqlplus connections; each one shells out and sleeps for 3600 seconds (1 hour), so its database session stays INACTIVE.

Verify the Inactive Sessions:

The SQL query counts the inactive sessions for the user test_user.

Run the Soft Limit Monitoring Script:

The soft limit monitoring script is executed with a threshold of 80% and the SID ORCL. With roughly 100 sessions against a limit of 120, utilization is about 83%, which should breach the 80% threshold (see the sanity-check sketch below).
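
Before running the monitor, it can help to compute the expected soft-limit utilization for TEST_USER directly, using the same formula the monitoring query applies (session count divided by the lesser of the processes parameter and the profile's SESSIONS_PER_USER limit). A minimal sketch, assuming the test_profile created above and a local SYSDBA connection:

#!/bin/bash
# pct_soft sanity check for TEST_USER (assumes ORACLE_SID is set in the environment)
sqlplus -s / as sysdba <<EOF
set lines 120 pages 100
select s.username,
       count(*) sessions_now,
       least(p.procs, pr.limits) connection_limit,
       round(100 * count(*) / least(p.procs, pr.limits)) pct_soft
from v\$session s,
     (select to_number(value) procs from v\$parameter where name = 'processes') p,
     (select to_number(decode(limit, 'UNLIMITED', 65535, limit)) limits
      from dba_profiles
      where profile = 'TEST_PROFILE' and resource_name = 'SESSIONS_PER_USER') pr
where s.username = 'TEST_USER'
group by s.username, p.procs, pr.limits;
exit
EOF
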
Cleanup:
After testing, let the idle sessions expire (or kill them), then clean up by dropping the test user and profile:

-- Drop the test user and the profile
DROP USER test_user CASCADE;
DROP PROFILE test_profile;

This setup lets you simulate the scenario and verify that the soft limit monitoring script works as intended. Adjust it to your database environment and requirements.


==== GG CHECK

#!/bin/bash

# ggs_rep_status.sh
#
# Check Goldengate replicat/extract for STOP or ABEND status and any replicat/extract lagging more than threshold.
# Email DBAs about it.
# By default, all extracts/replicats/pumps/userexits are TIER=2 (and threshold=600 sec).
# To change the default lag threshold, set the TIER by passing 1 or 3 as an argument to this script.

####### SCRIPT STARTS ##################################

# Load environment variables
source ~/.profile

# Check for correct number of arguments
if [ $# -lt 1 ]; then
echo "Usage: $0 <TIER> [SID]"
exit 1
fi

# Set variables
TIERTYPE=$1
instname=${2:-$(hostname)}
logbase="/u01/home/oracle/logs"
scriptname=$(basename "$0")
logfile_info="$logbase/scripts/$scriptname.$(hostname -s).$instname.log.info"
logfile_warn="$logbase/scripts/$scriptname.$(hostname -s).$instname.log.warn"
logfile_alert="$logbase/scripts/$scriptname.$(hostname -s).$instname.log.alert"
logfile_report="$logbase/scripts/$scriptname.$(hostname -s).$instname.log.report"
logfile_alert_subject="$logbase/scripts/$scriptname.$(hostname -s).$instname.log.alert.subject"

# Clean up old log files
rm -f $logfile_info $logfile_warn $logfile_alert $logfile_report $logfile_alert_subject

# Check if ORACLE_HOME is defined
if [ -z "$ORACLE_HOME" ]; then
echo "$ORACLE_SID gg_rep_status.sh FAILED as ORACLE_HOME is undefined" >> $logfile_alert
echo "ALERT: gg_rep_status.sh FAILED as ORACLE_HOME is undefined" > $logfile_alert_subject
exit 1
fi

# Get GGS_HOME from running processes
for GGS_HOME in $(ps -afe | grep ./mgr | grep PARAMFILE | grep -v grep | sed -e "s/.*PARAMFILE //;s/\/dirprm.*//"); do
if [ ! -x "${GGS_HOME}/ggsci" ]; then
echo "$ORACLE_SID gg_rep_status.sh FAILED as GGS_HOME is undefined" >> $logfile_alert
echo "ALERT: gg_rep_status.sh FAILED as GGS_HOME is undefined" > $logfile_alert_subject
continue
fi

# Check for _system_trig_enabled=FALSE
if ps -ef | grep ora_pmon | grep -v grep > /dev/null; then
echo "This is a db host, need to check for _system_trig_enabled=FALSE"
TRIGDISABLED=$(sqlplus -s / as sysdba <<EOF
set heading off
set echo off
set feedback off
set pagesize 0
alter session set parallel_force_local=TRUE;
select upper(value) from gv\$parameter where name='_system_trig_enabled';
exit
EOF
)
echo "value of TRIGDISABLED is $TRIGDISABLED"

TRIGDISABLED2=$(sqlplus -s / as sysdba <<EOF
set heading off
set echo off
set feedback off
set pagesize 0
alter session set parallel_force_local=TRUE;
select upper(status) from dba_triggers where trigger_name='GGS_DDL_TRIGGER_BEFORE' and owner='SYS';
exit
EOF
)
echo "status of trigger GGS_DDL_TRIGGER_BEFORE is $TRIGDISABLED2" >> $logfile_info

if echo "$TRIGDISABLED" | grep -iq FALSE; then
echo "_system_trig_enable is set to FALSE on $(hostname -s). DDL replication will be broken and DML-DATA replication corrupted for the tables which were altered while _system_trig_enable was set to FALSE. Please find out why this was set to FALSE and arrange to have this unset or set to TRUE." >> $logfile_alert
echo "ALERT: gg_rep_status.sh _system_trig_enable is set to FALSE on $(hostname -s)" > $logfile_alert_subject
else
echo "_system_trig_enable is NOT SET or NOT SET to FALSE" >> $logfile_info
fi

if echo "$TRIGDISABLED2" | grep -iq DISABLED; then
echo "sys owned trigger GGS_DDL_TRIGGER_BEFORE is disabled on $(hostname -s). DDL replication will be broken and DML-DATA replication corrupted for the tables which were altered while this trigger was disabled. Please find out why this was disabled and arrange to have this enabled." >> $logfile_alert
echo "ALERT: gg_rep_status.sh sys owned trigger GGS_DDL_TRIGGER_BEFORE is disabled on $(hostname -s)" > $logfile_alert_subject
else
echo "GGS_DDL_TRIGGER_BEFORE is enabled. Ok" >> $logfile_info
fi
fi

# Check if config file exists, if not create a default one
FILE="$GGS_HOME/../admin/ggs_rep_status.conf"
if [ ! -f "$FILE" ] || [ ! -s "$FILE" ]; then
echo "File $FILE does not exist ... creating default config file ggs_rep_status.conf" >> $logfile_info
cat <<EOF > $FILE
## ggs_rep_status Config ##
## Version$Revision: 1.2 $
##
## Primary purpose right now is for excluding certain Extract/replicat lag or status from being alerted on.
##
## IF an EXTRACT/REPLICAT needs to be excluded from a LAG check, include it in the EXCLUDE_LAG list.
## IF an EXTRACT/REPLICAT needs to be excluded from a status check (STOPPED/ABENDED to be ignored),
## then include it in the EXCLUDE_STATUS list.
## If only a specific extract/replicat needs to be given a specific lag threshold, with all others a generic
## tier 1/2/3 threshold, include the extract/replicat name with its lag threshold in seconds, separated by "_"
## in the SPECIAL_LAG section below. Please put only a space between 2 GG components.
##
## Separated by comma
EXCLUDE_LAG="NULL,XREPLICAT,EFDMSOL,EXML,EXML2"
EXCLUDE_STATUS="NULL,YEXTRACT,EFDMSOL"
##
## lag threshold in seconds, separated by "_", see examples below
SPECIAL_LAG=("XREPLICAT_36000" "ZEXTRACT_14400" "XCFJAT2_3600")
EOF
fi

# Read config file - it is plain bash syntax, so source it (this also keeps SPECIAL_LAG as a real
# array instead of a flat string, which the per-component threshold lookup below relies on)
. "$FILE"

echo "EXCLUDE_LAG = $EXCLUDE_LAG"
echo "EXCLUDE_STATUS = $EXCLUDE_STATUS"
echo "SPECIAL_LAG = $SPECIAL_LAG"

# Set logfile names and threshold for Lag
LONGTHRESHOLD=1200
INFOLOG="$logbase/scripts/infolog.out"
LOGOUT="$logbase/scripts/lag.out"

# Remove old logfile if exists from previous run
rm -f $LOGOUT

# Get all replicat/extract status and lag info
cd $GGS_HOME
./ggsci <<EOF > $INFOLOG
info all
exit
EOF

# Check for status STOPPED or ABENDED and Total Lag
echo "REP/EXT STATUS NAME Actual_lag TimeSinceCkpt Tier Acceptable_lag+timeSinceCkpt_sec Actual_lag+timeSinceCkpt_sec" > $LOGOUT
echo "======================================================================================================================================================" >> $LOGOUT
egrep 'REPLICAT|EXTRACT' $INFOLOG | while read -r line; do
threshold=600 # Default tier 2 threshold

if [ -n "$TIERTYPE" ]; then
if [ "$TIERTYPE" -eq 3 ]; then
threshold=10800
elif [ "$TIERTYPE" -eq 1 ]; then
threshold=360
fi
else
TIERTYPE=2
fi

GGNAME=$(echo $line | awk '{ print $3 }')
GGTYPE=$(echo $line | awk '{ print $1 }')

if echo "$EXCLUDE_LAG" | grep -iq $GGNAME; then
IGNORELAG=YES
else
IGNORELAG=NO
fi

if echo "$EXCLUDE_STATUS" | grep -iq $GGNAME; then
IGNORESTATUS=YES
IGNORELAG=YES
else
IGNORESTATUS=NO
fi

if [ "$GGTYPE" = "EXTRACT" ]; then
EXTNAME=$(echo $line | awk '{ print $3 }')
extname=$(echo $EXTNAME | tr '[A-Z]' '[a-z]')

RMTHOST=$(grep -i RMTHOST $GGS_HOME/dirprm/$extname.prm | awk '{ print $2 }')
if echo "$RMTHOST" | grep -iq replica; then
TOPUMP=YES
else
TOPUMP=NO
fi

if grep -iq PASSTHRU $GGS_HOME/dirprm/$extname.prm; then
PUMP=YES
else
PUMP=NO
fi

if grep -q .props $GGS_HOME/dirprm/$extname.prm; then
USEREXIT=YES
else
USEREXIT=NO
fi

if [ "$TOPUMP" = "NO" ] && [ "$PUMP" = "NO" ] && [ "$USEREXIT" = "NO" ]; then
threshold=10800
fi

if [[ "$PUMP" = "NO" && "$USEREXIT" = "NO" && "$EXTNAME" != "EXML" && "$EXTNAME" != "EXML2" ]]; then
echo "send extract $EXTNAME showtrans duration ${LONGTHRESHOLD}MINUTES" | $GGS_HOME/ggsci > /tmp/longgg.log
sleep 15
echo "send extract $EXTNAME showtrans duration ${LONGTHRESHOLD}MINUTES" | $GGS_HOME/ggsci >> /tmp/longgg.log
sleep 20
echo "send extract $EXTNAME status" | $GGS_HOME/ggsci
sleep 10
echo "send extract $EXTNAME showtrans duration ${LONGTHRESHOLD}MINUTES" | $GGS_HOME/ggsci >> /tmp/longgg.log
if grep -q "No transactions found" /tmp/longgg.log; then
LONGRUNNING=NO
else
LONGRUNNING=YES
fi
fi
fi

# Check if this GG component has a separate lag threshold specified
for special_lag in "${SPECIAL_LAG[@]}"; do
if echo $special_lag | grep -iq $GGNAME; then
threshold=$(echo $special_lag | awk -F_ '{ print $2 }')
fi
done

# Convert "Lag at Chkpt" ($4) and "Time Since Chkpt" ($5) from HH:MM:SS to seconds and add them
lag_at_ckpt=$(echo $line | awk '{print $4}' | awk -F: '{print $1*3600+$2*60+$3}')
since_ckpt=$(echo $line | awk '{print $5}' | awk -F: '{print $1*3600+$2*60+$3}')
lag=$((lag_at_ckpt + since_ckpt))
status=$(echo $line | awk '{print $2}')

if [[ $status = "ABENDED" && $IGNORESTATUS = "NO" ]] || [[ $status = "STOPPED" && $IGNORESTATUS = "NO" ]] || [[ $lag -gt $threshold && $IGNORELAG = "NO" ]]; then
echo "$line $TIERTYPE $threshold $lag" >> $LOGOUT
echo "--------------------------------------------------------------------------------------------------------------------------" >> $LOGOUT
extrep=$(echo $line | awk '{print $3}')
echo "ALERT: gg_rep_status.sh EXTRACT/REPLICAT is lagging over $threshold secs on $(hostname -s)" > $logfile_alert_subject
fi

if [[ $GGTYPE = "EXTRACT" && $LONGRUNNING = "YES" && $IGNORESTATUS = "NO" && $PUMP = "NO" && $USEREXIT = "NO" ]]; then
echo "=================================================================================" >> $LOGOUT
echo " EXTRACT $GGNAME has Long Running Transaction Older than $LONGTHRESHOLD MINUTES or send command unresponsive" >> $LOGOUT
echo " " >> $LOGOUT
cat /tmp/longgg.log >> $LOGOUT
echo " ---------------------------------------------------------------------------------" >> $LOGOUT
echo " Please CHECK!. Use the following command" >> $LOGOUT
echo " send $EXTNAME showtrans duration XXXXMINUTES" >> $LOGOUT
echo " for example XID: 12768.19.5213 should be in gv-transaction xidusn.xidslot.xidsqn" >> $LOGOUT
echo " Gv-transaction.ADDR=v-session.TADDR Find sql_id, event, seconds_in_wait for the session" >> $LOGOUT
echo "=================================================================================" >> $LOGOUT
echo "ALERT: gg_rep_status.sh $GGNAME has Long Running Transaction Older than $LONGTHRESHOLD MINUTES on $(hostname -s)" > $logfile_alert_subject
fi
done

# Notify alertsdba if replicat/status is stopped/abended, has long running txns or lag exceeds threshold
if egrep 'REPLICAT|EXTRACT' $LOGOUT > /dev/null; then
cat $LOGOUT >> $logfile_alert
fi
done
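
The check is normally run on a schedule. A minimal usage sketch (the cron entry and paths are illustrative assumptions, not part of the script itself):

# Run manually for tier-1 targets (lag threshold 360 sec); the second argument is optional
# and is only used in the log file names
./ggs_rep_status.sh 1 ORCL

# Or schedule it from cron every 5 minutes
*/5 * * * * /u01/home/oracle/scripts/ggs_rep_status.sh 1 ORCL >> /u01/home/oracle/logs/scripts/ggs_rep_status.cron.log 2>&1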

## raj -- non-CDB variant of the soft-limit query
connect / as sysdba
alter session disable parallel query;
spool $detaillog append
col module for a60
col username format a32
col profile format a30
col module format a30
col number_of_sessions format 999,999
set lines 200 feedback off

select /*+ noparallel */ username, profile, connection_limit, number_of_sessions, pct_soft
from (
select a.username, c.profile, least(b.value, c.limits) connection_limit, number_of_sessions,
round(100 * number_of_sessions / least(b.value, c.limits)) pct_soft
from (
select username, count(*) number_of_sessions
from v$session
where username is not null
group by username
) a,
(select to_number(value) value from v$parameter where name = 'processes') b,
(select username, u.profile, to_number(decode(limit, 'UNLIMITED', 65535, limit)) limits
from dba_users u, dba_profiles p
where limit != 'DEFAULT' and u.profile = p.profile and resource_name = 'SESSIONS_PER_USER' and account_status = 'OPEN'
order by username
) c
where a.username = c.username
) final
order by pct_soft desc;

spool off;

## raj

#!/bin/bash

# Define environment variables
ORACLE_HOME=/path/to/oracle/home
ORACLE_SID=your_database_sid
export ORACLE_HOME ORACLE_SID PATH=$ORACLE_HOME/bin:$PATH

# Output log file
LOG_FILE="high_session_details_$(date +%Y%m%d_%H%M%S).log"

# SQL script to get session details
SQL_SCRIPT=$(cat <<EOF
SET ECHO OFF
SET FEEDBACK OFF
SET LINESIZE 200
SET PAGESIZE 100
COLUMN username FORMAT A20
COLUMN profile FORMAT A20
COLUMN status FORMAT A10
COLUMN program FORMAT A30
COLUMN machine FORMAT A30
COLUMN osuser FORMAT A20
COLUMN count FORMAT 999,999

-- Check session counts and high utilization
SPOOL $LOG_FILE

PROMPT High Session Utilization Summary:
SELECT username, COUNT(*) AS count
FROM v\$session
WHERE username IS NOT NULL
GROUP BY username
ORDER BY count DESC;

PROMPT
PROMPT High Resource Usage by Sessions:
SELECT s.username, s.osuser, s.machine, s.program, s.status, COUNT(*) AS count
FROM v\$session s
WHERE s.username IS NOT NULL
GROUP BY s.username, s.osuser, s.machine, s.program, s.status
ORDER BY count DESC;

PROMPT
PROMPT Session Limits and Profiles:
SELECT u.username, u.profile, p.resource_name, p.limit
FROM dba_users u, dba_profiles p
WHERE u.profile = p.profile
AND p.resource_name = 'SESSIONS_PER_USER'
ORDER BY u.username;

PROMPT
PROMPT Active Sessions:
SELECT s.sid, s.serial#, s.username, s.status, s.osuser, s.machine, s.program
FROM v\$session s
WHERE s.status = 'ACTIVE'
ORDER BY s.username;

SPOOL OFF
EOF
)

# Execute SQL script
echo "Connecting to Oracle and fetching high session details..."
sqlplus -s "/ as sysdba" <<EOF
$SQL_SCRIPT
EXIT;
EOF

# Output log location
echo "Session details logged to: $LOG_FILE"

# Check if the log file indicates high session usage and notify if needed
# Sum the per-user counts from the "High Session Utilization Summary" section of the spool
HIGH_SESSION_COUNT=$(awk '/High Session Utilization Summary/{f=1;next} /High Resource Usage/{f=0} f {c=$NF; gsub(",","",c); if (c ~ /^[0-9]+$/) sum+=c} END {print sum+0}' "$LOG_FILE")
THRESHOLD=100 # Define a threshold for high sessions

if [[ $HIGH_SESSION_COUNT -gt $THRESHOLD ]]; then
echo "High session count detected: $HIGH_SESSION_COUNT sessions."
echo "Please review the log file: $LOG_FILE"
else
echo "Session count is within normal limits."
fi
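
If the check should notify the DBA team rather than just print to the console, the spooled report can be mailed the same way the soft-limit monitor does it. A minimal sketch that could be appended to the script, assuming a mail alias and that sendmail is available on the host:

# Optional: mail the report when the threshold is breached (the mailto address is an assumption)
mailto="dba-team@example.com"
if [[ ${HIGH_SESSION_COUNT:-0} -gt $THRESHOLD ]]; then
{
echo "To: $mailto"
echo "Subject: ALERT: $(hostname -s):$ORACLE_SID high session count ($HIGH_SESSION_COUNT)"
echo ""
cat "$LOG_FILE"
} | sendmail -t
fi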