Oracle 11g Performance Tuning


These notes cover performance tuning in Oracle Database 11g Release 2. The central theme is DB time tuning: tuning based on the fundamental notion of time spent in the database, which has become the common language for Oracle performance analysis. Oracle 11g also adds Automatic SQL Tuning, which automates much of the SQL tuning process, although there is a lot more to SQL performance than bad execution plans.




For example, for inner join operations the rows from the two sorted inputs are returned when the join values are equal. If they are not equal, whichever row has the lower value is discarded and another row is obtained from that input. This process repeats until all rows have been processed. Sometimes an index is not used simply because of how the query is written, for example when a function is applied to the indexed column or when the predicate forces an implicit data type conversion.

Avoid index merges, especially when using the rule-based optimizer, and minimize table lookups in queries.

Join Techniques

A sort-merge join does not require indexes. Each table is first sorted on the column values used to join the tables, and the two sorted result sets are then merged into one.

A nested loops join involves an index on at least one of the tables. A full table scan is done on one table, and for every row found a lookup is performed on the second table, usually the one that is indexed. In a hash join, a hash table is built in memory on the smaller of the two row sources; the larger table is then scanned and each of its rows is used to probe the hash table for matching rows.

Using Join Techniques

Use nested loops when only a small subset of the rows needs to be accessed and an index is available to support it.

Use sort-merge or hash joins when joining most or all of the rows of the tables. When determining the best join order, try to start with the table that returns the smallest number of rows, provided that the subsequent tables can be joined efficiently. The optimizer sometimes automatically translates IN-based subqueries into a join, and Oracle has some special optimizations for this case. Avoid over-indexing, especially for columns that are frequently updated.
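
If you want to experiment with join methods while testing, optimizer hints let you request one per query. A minimal sketch, assuming hypothetical CUSTOMERS and ORDERS tables joined on CUST_ID; USE_NL and USE_HASH are standard Oracle hints (USE_MERGE requests a sort-merge join), but the optimizer still checks that the hinted method is valid for the statement.

-- Small, indexed lookup: nested loops is a natural fit
SELECT /*+ USE_NL(o) */ c.cust_name, o.order_total
  FROM customers c JOIN orders o ON o.cust_id = c.cust_id
 WHERE c.cust_id = 42;

-- Joining most of the rows: hash (or sort-merge) is usually better
SELECT /*+ USE_HASH(o) */ c.cust_name, SUM(o.order_total)
  FROM customers c JOIN orders o ON o.cust_id = c.cust_id
 GROUP BY c.cust_name;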

The baseline statistics are a set of statistics that are taken when the instance is running acceptably. Examine the differences to determine what has changed on the system.

Did the data change? Is the session producing the report waiting on something?

Oracle 11g Performance Tuning

Consider common performance errors: From your list of differences in the collected statistics, make a comparison with common performance errors. Determine whether one of these errors has occurred on your system. Build a trial solution: Include a conceptual model in your solution.

The purpose of this model is to help you keep the overall picture of the database in view. Implement and measure the change: After you have developed the trial solution, make the appropriate change. Make only one change at a time. If you make multiple changes at the same time, you will not know which change was effective, and if the changes do not solve the problem, you will not know whether some changes helped and others hindered. Collect statistics to measure the change.

If you determine that more tuning is required, return to step 3 and repeat the process. If your solution meets the goal, make the current set of statistics the new baseline set. You met your goal: stop tuning.

Quiz: Which of the following is not a tuning step? a. Develop a trial solution. b. Capture statistics. c. Identify the problem. d. Take a backup. e. Test the solution and measure the change.

The Oracle database server software captures information about its own operation. Three major types of data are collected; the raw counts have little meaning until they are compared over time.

The events that collect the most time tend to be the most important. The statistics in Oracle Database 11g are correlated by the use of a time model. The time model statistics are based on a percentage of DB time, giving them a common basis for comparison.

The unit could be a time measure, such as seconds, or another measure such as transactions, sessions, allocated space, or events. Metrics provide a basis for proactively monitoring performance: you can set thresholds on a metric so that an alert is generated when a threshold is crossed.
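
Thresholds on server-generated metrics can be set programmatically as well as through Enterprise Manager. A minimal sketch using the documented DBMS_SERVER_ALERT package; the metric, threshold values, instance name, and service name below are assumptions chosen only to illustrate the call.

BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.CPU_TIME_PER_CALL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '8000',      -- microseconds per call (assumed)
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '10000',
    observation_period      => 1,           -- minutes
    consecutive_occurrences => 2,
    instance_name           => 'orcl',      -- assumed instance name
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE,
    object_name             => 'payroll');  -- assumed service name
END;
/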

Many of the statistics used for performance tuning are held in memory-based dynamic tables and views.

These statistics are not saved when the instance is shut down. The alert log can give important information about the operation of the database, areas that could be tuned, and reference information related to the tuning reports. The background and server process trace files can sometimes give you insight into performance problems, such as warnings in the LGWR trace file about redo writes taking more than 500 ms, but they are primarily for capturing error conditions and debugging information.
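
In 11g the alert log and trace files live under the Automatic Diagnostic Repository, and their locations can be read from a dynamic view. A quick, read-only sketch:

-- Where the alert log and trace files are written for this instance
SELECT name, value
  FROM v$diag_info
 WHERE name IN ('Diag Trace', 'Diag Alert', 'Default Trace File');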

The Tuning Pack is also separately licensed, and it requires the Diagnostics Pack. The advantage of AWR is that it manages the storage of this data automatically and provides improved interpretation of the performance data.

Statspack snapshots and AWR snapshots are not compatible. Some of the Enterprise Manager pages related to tuning are available with any edition of the Oracle Database software: Personal, Standard Edition, or Enterprise Edition.

In high-volume online transaction processing (OLTP) environments, you may accept longer user response times in order to get more total transactions from many users. Studies have shown that in a Web-based environment, user response time must be less than 7 seconds or the user goes somewhere else; in that case, everything else is subordinate to response time.

Business requirements affect tuning goals. In a business environment where downtime may be measured in hundreds or thousands of dollars per minute, the overhead of protecting the instance from failure and reducing recovery time is more important than user response time. Tuning for recovery must therefore balance the ongoing overhead of additional disk writes to maintain redo log files against the goal of protecting the business from loss.

At a glance you can see the top timed events. The top timed events will always have some values, and the section is available in both AWR and Statspack reports. The events reported here provide a direction for further investigation. In this case, the top two timed events indicate a problem in the database buffer cache.

DB time tuning is not just about reducing waits. Reducing waits and reducing overall time often go together, but in other cases there is a trade-off, for example with a parallel query. In general, you can say that tuning is the avoidance of consuming or holding resources in a wasteful manner.

Any request to the database is composed of two distinct components: wait time and CPU time. The wait time is the sum of all the waits for various database instance resources. The CPU time is the sum of the time that is spent actually working on the request. These times are not necessarily composed of one wait and one block of CPU time.

Often processes will wait a short time for a DB resource and then run briefly on the CPU, and do this repeatedly. Tuning consists of reducing or eliminating the wait time and reducing the CPU time. By comparing CPU time with wait time, you can determine how much of the response time is spent on useful work and how much on waiting for resources potentially held by other processes. As a general rule, systems where CPU time is dominant usually need less tuning than the ones where wait time is dominant.

The proportion of wait time to CPU time always tends to increase as the load on the system increases; steep increases in wait time are a sign of contention and must be addressed for good scalability. When contention is evidenced by increased wait time, adding more CPUs to a node, or more nodes to a cluster, provides very limited benefit.

Time Model: Overview

There are many components involved in tuning an Oracle database system, and each has its own set of statistics.

How can you measure the expected benefit from a tuning action on the overall system? For example, would the overall performance improve if you move memory from the buffer cache to the shared pool? When you look at the system as a whole, time is the only common ruler for comparison across components. In the Oracle database server, most of the advisories report their findings in time. This instrumentation helps the Oracle database server to identify quantitative effects on the database operations.

The most important of the time model statistics is DB time. This statistic represents the total time spent in database calls by user sessions and indicates the total instance workload. It is the sum of the CPU and wait times of all sessions that are not waiting on idle wait events, that is, all non-idle user sessions.

The objective for tuning an Oracle database system could be stated as reducing the time that users spend performing some action on the database, or simply reducing DB time.

Time Model Statistics Hierarchy

The relationships between the time model statistics are listed in the slide. They form two trees: one rooted at DB time for user sessions and one rooted at background elapsed time for background processes. The time reported by a child in a tree is contained within its parent.
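
The time model statistics described below are exposed at the instance level in V$SYS_TIME_MODEL (and per session in V$SESS_TIME_MODEL). A simple read-only sketch, converting the microsecond values to seconds:

-- DB time, DB CPU, and the rest of the time model, largest consumers first
SELECT stat_name, ROUND(value / 1e6, 1) AS seconds
  FROM v$sys_time_model
 ORDER BY value DESC;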

DB time: Amount of elapsed time in microseconds spent performing database user-level calls. This does not include the time spent on instance background processes such as PMON. DB time is measured cumulatively from the time that the instance was started. Because DB time is calculated by combining the times from all non-idle user sessions, it is possible for DB time to exceed the actual time elapsed since the instance started. For example, an instance that has been running for 30 minutes could have four active user sessions whose cumulative DB time is approximately 120 minutes.

DB CPU: Amount of CPU time in microseconds spent on database user-level calls.

This time includes time spent on the run queue.

sequence load elapsed time: Amount of elapsed time spent getting the next sequence number from the data dictionary. If a sequence is cached, this is the amount of time spent replenishing the cache when it runs out; no time is charged when a sequence number is found in the cache.

parse time elapsed: Amount of elapsed time spent parsing SQL statements.

It includes both soft and hard parse time.

sql execute elapsed time: Amount of elapsed time SQL statements spend executing.

connection management call elapsed time: Amount of elapsed time spent performing session connect and disconnect calls.

failed parse elapsed time: Amount of time spent performing SQL parses that ultimately fail with some parse error.

failed parse (out of shared memory) elapsed time: Amount of time spent performing SQL parses that fail with an out-of-shared-memory error.

hard parse (sharing criteria) elapsed time: Amount of elapsed time spent performing SQL hard parses when the hard parse resulted from not being able to share an existing cursor in the SQL cache.

hard parse (bind mismatch) elapsed time: Amount of elapsed time spent performing SQL hard parses when the hard parse resulted from a bind type or bind size mismatch with an existing cursor in the SQL cache.

PL/SQL execution elapsed time: Amount of elapsed time spent running the PL/SQL interpreter. This does not include time spent recursively executing or parsing SQL statements, or time spent recursively executing the Java Virtual Machine.

Java execution elapsed time: Amount of elapsed time spent running the Java VM.

repeated bind elapsed time: Elapsed time spent on re-binding.

background cpu time: Amount of CPU time in microseconds consumed by database background processes.

background elapsed time: Total time spent in the database by background sessions, including both CPU time and non-idle wait time.

The time model information is also available in the Statspack report.

Quiz: Waits are taking more time than CPU time. This indicates that: a. The application is scalable. b. Adding more CPUs will not help performance.

Dynamic Performance Views

The Oracle database server maintains a dynamic set of data about the operation and performance of the instance.

These dynamic performance views are based on virtual tables that are built from memory structures inside the database server. That is, they are not conventional tables that reside in a database.

Dynamic performance views include the raw information used by AWR and Statspack, as well as detailed information about many aspects of instance operation. The examples shown in the slide answer the following questions: Which SQL statements, and their associated numbers of executions, have consumed more than a given number of microseconds of CPU time?

What are the session IDs of any sessions that are currently holding a lock that is blocking another user, and how long has that lock been held? Dynamic performance views are built on memory structures that hold the statistics, and allow you to view many of the statistics that are used in performance tuning. Most contain information about a specific component of the instance.
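
As a sketch, the two questions above can be answered with queries like the following; the 200,000-microsecond CPU threshold is an assumed value, since the figure from the slide did not survive.

-- SQL statements and execution counts with high cumulative CPU time
SELECT sql_text, executions
  FROM v$sql
 WHERE cpu_time > 200000;    -- microseconds (assumed threshold)

-- Sessions holding a lock that is blocking someone, and for how long (seconds)
SELECT sid, ctime
  FROM v$lock
 WHERE block > 0;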

In this course, you use the information from the dynamic performance views indirectly to tune specific components. For a complete list of the various dynamic performance views, refer to Oracle Database Reference. Most of the dynamic performance views are not frequently used directly by the DBA.

At times, it is helpful to know that these views exist so that you can investigate details.

Considerations

Some dynamic views contain data that is not applicable to all states of an instance or database.

These views are based on memory structures, and the data in them is cumulative since startup; the data is reset at startup. Because all reads on these views are current reads, there is no locking mechanism on them, so there is no guarantee that the data will be read-consistent. You occasionally see anomalies in the statistics when one or more tables related to a particular statistic were updated, but not all of the tables had completed the update when the select occurred.

The STATISTICS_LEVEL initialization parameter controls this collection, and its values are BASIC, TYPICAL, and ALL. With BASIC, no advisory or other statistical data is collected, and many of the statistics required for a performance baseline are missing; Oracle strongly recommends that you do not disable statistics gathering in this way.

TYPICAL is the default value: data is collected for segment-level statistics, timed statistics, and all advisories, and the values of the individual statistics collection parameters are overridden. Among the settings those individual parameters accept are one in which no statistics are collected but memory is allocated, and one in which statistics are collected and memory is allocated; their system-wide state is not changed.

Instance Activity and Wait Event Statistics

The Oracle Database maintains several metrics that reflect internal activity within an instance. These metrics are exposed to DBAs through dynamic performance views.
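
A quick sketch for checking and changing the collection level, and for seeing which statistics each level activates (V$STATISTICS_LEVEL is the documented view for this):

-- Current setting (SQL*Plus)
SHOW PARAMETER statistics_level

-- What each collected statistic requires and whether it is currently on
SELECT statistics_name, activation_level, system_status
  FROM v$statistics_level;

-- Change the level instance-wide (TYPICAL is the default)
ALTER SYSTEM SET statistics_level = TYPICAL SCOPE = BOTH;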

Many of these views reflect a set of statistical counters that are initialized to 0 at instance startup and they are incremented until the instance is shut down. Instance activity and wait event statistics are the two classes of metrics that generally drive the performance tuning investigation process. Instance activity statistics are provided by developers to help debug various software features. They may or may not relate directly to wait events or other metrics.

Not all statistic names are documented. Remember that these are only symptoms of problems, not the actual causes. Both Statspack and AWR take snapshots of this data, perform calculations based on those snapshots, and report the derived information. Because there are a large number of statistics, they are grouped into classes; each statistic may belong to one or more classes, and the class numbers are additive. You can query this view to find cumulative totals since the instance started.
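
Assuming the view in question is V$SYSSTAT (the standard home of cumulative instance activity statistics), a sketch of such a query, with the documented class numbers spelled out in the comment:

-- CLASS is a bitmask: 1 user, 2 redo, 4 enqueue, 8 cache, 16 OS,
-- 32 Real Application Clusters, 64 SQL, 128 debug (values are additive)
SELECT name, class, value
  FROM v$sysstat
 ORDER BY value DESC;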

The service name allows collection of statistics by connection service name, which is very useful for monitoring performance by application: every user that connects uses a specific service name per application. There are always two services defined, SYS$BACKGROUND and SYS$USERS. Service data is cumulative from instance startup.

You can query this view to find cumulative totals for each service since the instance started. Example: determine the sessions that consume more than 30,000 bytes of PGA memory. Another view lets you find cumulative totals of detailed SGA usage since the instance started. Buffer busy waits indicate that there are some buffers in the buffer cache that multiple processes are attempting to access concurrently.
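
A sketch of the PGA example, joining the per-session statistics to their names; the 30,000-byte threshold is taken from the slide text and is approximate, since the exact figure did not survive.

-- Sessions currently holding more than ~30,000 bytes of PGA memory
SELECT st.sid, st.value AS pga_bytes
  FROM v$sesstat  st
  JOIN v$statname n ON n.statistic# = st.statistic#
 WHERE n.name = 'session pga memory'
   AND st.value > 30000;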

This event is accompanied by three parameters. Two of them identify the block number, in the data file identified by the file number, for which the server needs to wait; the third identifies the place in the kernel, and each place in the kernel points to a different reason.

The ID refers to the place in the code that is calling this event for the session. A related event, log file switch (checkpoint incomplete), occurs when the redo log cannot be reused because the checkpoint for that log has not completed.

This event has no parameters.

Wait Classes

The many different wait events possible in Oracle Database 11g are categorized into wait classes on the basis of the solutions related to each event, and each event belongs to only one wait class. This enables high-level analysis of the wait events. For example, exclusive transaction (TX) locks are generally an application-level issue, and segment space management high-water mark (HW) locks are generally a configuration issue. The most commonly occurring wait classes include Application, Commit, Concurrency, Configuration, Network, User I/O, System I/O, and Other; the Other class contains waits that should not typically occur on a system.
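
Wait time rolled up by class is available directly from a dynamic view, which makes a good first pass before drilling into individual events. A read-only sketch (TIME_WAITED is in centiseconds):

SELECT wait_class, total_waits, ROUND(time_waited / 100) AS seconds_waited
  FROM v$system_wait_class
 WHERE wait_class <> 'Idle'
 ORDER BY time_waited DESC;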

Most of the wait event statistics are the same for each view; the differences are shown above. When you are troubleshooting, you need to know whether a process has waited for any resource. You can then investigate further to see whether such waits occur frequently and whether they can be correlated with other phenomena, such as the use of particular modules.

The wait events shown above are idle wait class events that always appear; they do not indicate a problem.

Commonly Observed Wait Events

The slide shows a list of wait events and the component areas that could be the source of these waits. The internal definitions of these wait events can change from version to version, which may cause other events to become more common. For a description of all wait events for a particular database version, see the Oracle Database Reference for that version.

The statistics for wait events at the system level are most useful when they can be correlated to a time period with known activity. A practical method is to capture the statistics into a table along with a timestamp, and then capture the statistics again later.

This allows you to compare the change in the statistic values over a particular time period; it removes the statistics related to instance startup and narrows the set of possible issues. Statspack and AWR follow this method, using snapshots to capture statistics, wait events, and metrics. The V$SESSION_WAIT view is helpful in diagnosing sessions that are proceeding very slowly or appear to be hung.
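
A home-grown version of that capture-and-diff method, sketched against V$SYSTEM_EVENT; the table name is arbitrary and the approach is deliberately simple compared with Statspack or AWR.

-- First capture, with a timestamp so later captures can be compared
CREATE TABLE my_event_snap AS
  SELECT SYSDATE AS snap_time, e.* FROM v$system_event e;

-- ... run the workload of interest, then capture again
INSERT INTO my_event_snap
  SELECT SYSDATE, e.* FROM v$system_event e;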

It is a dynamic initialization parameter. The current wait state is shown in the STATE column; possible values are WAITING, WAITED UNKNOWN TIME, WAITED SHORT TIME, and WAITED KNOWN TIME. Not all of the parameter columns (P1, P2, P3) are used for all events.

Precision of System Statistics

The Oracle database captures certain performance data with millisecond and microsecond granularity. Views that include microsecond and millisecond timings are listed in the slide; the actual granularity of the timing depends on the operating system. Existing time columns in other views capture centisecond times. The timing information gathered for system statistics is cumulative since the instance was started, while some session-level statistics views record the timing of a single event.

These statistics are the basis for all instance tuning. The raw statistics are not as useful as the delta values provided by snapshots that cover periods of interest.

Using Features of the Packs

The management pack names and features are listed on the left part of the slide. They all require a separate license that can be purchased only with Enterprise Edition. The slide also lists the features that are part of each pack; to use the Tuning Pack, you must also have the Diagnostics Pack.

If you cannot purchase the packs mentioned above, especially the Database Diagnostics and Database Tuning packs, you can still use the traditional approach to performance tuning and monitoring: Statspack reports, SQL traces, and most of the base statistics, as shown in the previous slide.

Enterprise Manager Database Control is accessed through a URL built from a host name and a port: the host name is the name or address of your computer, and the default port for Database Control is 1158. The Enterprise Manager Database Home page is your starting point for monitoring and administering your database.

General database health and performance information is presented on the Home page, with links and drill-downs to detailed information. Metrics are presented on the Database Home page in several categories. The General category provides a quick view of the status of the database and basic information about the database.

The Host CPU category displays a bar chart showing the relative CPU utilization of the Oracle database host. Two values appear in the bar chart: the darker bar at the bottom represents how much of the CPU this instance is consuming, and the upper, lighter bar represents all other processes. These colors correspond to the legend.

The chart shows the latest value rather than a historical value. The SQL Response Time category displays the current response time of the tracked set of SQL versus the baseline response time.

If the baseline and the current response time are equal, the system is performing at the expected rate; the lower the response time, the more efficiently the SQL statements execute. The link from Performance Findings takes you to the ADDM page, which provides a performance analysis table with findings that need attention; ADDM uses snapshots of database activity to perform a top-down analysis of your database activity. The Space Summary category helps you identify storage-related issues and provides recommendations for improved performance.

The High Availability category displays the status of items related to high availability. The first item is a link to the High Availability Console. The time and success of the last backup are displayed as a link to the View Backup Report page. When Oracle Restart is disabled, the status is a link to a page where you can enable it; after Oracle Restart is enabled, the status is no longer a link.

The following are additional categories that appear on the Database Home page but are not shown in the slide: one lets you get more information about an alert by clicking the corresponding message; one shows the most recent findings reported by an ADDM task; and one shows a summary of the policy rules that are violated in the Security, Configuration, and Storage areas.

Click the links to get more information about the specific rules or the overall compliance score. The Jobs category displays a report of Enterprise Manager job executions, showing the scheduled, running, suspended, and problem executions. If a value other than 0 appears in a field, you can click the number to go to the Job Activity page, where you can view information about all scheduled, currently running, and past jobs.

The alert log is stored in two forms in different directories.

The alert log file of a database is a chronological log of messages and errors. All recovery actions are logged, and you can also view the log to see non-critical errors and informative messages.

Using Alert Log Information as an Aid in Tuning

The information listed in the slide, and additional information, is written to the alert log. The information written to the alert log changes somewhat with each version of the Oracle database.

Some values, such as the checkpoint start and end times, are written only when requested. The alert log file can grow to an unmanageable size. You can safely delete the alert log while the instance is started, although you should consider making an archived copy of it first. This archived copy could prove valuable if you should have a future problem that requires investigating the history of an instance. Both versions of the alert log, text and XML, should be trimmed periodically.

For example, suppose the DBA noticed a change in performance statistics.


The DBA finds that an instance parameter has changed since the last baseline. To confirm that the performance change corresponds to the parameter change, the alert log can be searched for the section that lists system parameters with non-default values.

Instance-Level Tracing

Instance-level tracing should be enabled only when absolutely necessary.

Session-Level Tracing

A single statement enables writing to a trace file for a particular session (see the sketch below). Typically, only a DBA has the permissions required to enable tracing on any session.
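
The exact statement from the slide is not reproduced here; as a sketch, these are standard Oracle 11g ways to switch session-level SQL tracing on and off (the SID and serial number are hypothetical, and DBMS_MONITOR is the documented package for tracing other sessions):

-- Trace your own session
ALTER SESSION SET sql_trace = TRUE;

-- Trace another session, including wait events (SID 123, serial# 456 are made up)
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- Switch it off again when finished
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);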

This script is located in the directory shown in the slide.

Background Process Trace Files

The background processes create these files. In general, these files contain diagnostic information, not information regarding performance tuning; however, by using events, performance-related information can be written to these files.

Database events can be set by the DBA, but this is usually done only under the supervision of Oracle Support. These files are difficult to read because they are intended for diagnosis and troubleshooting by Oracle Support, but they can contain valuable information that a DBA can use.

An exception to this rule is the optimizer trace event (10053), which can be used to trace the optimizer's choices; this event is explained later.

Using Basic Tools

This practice covers the topics in this lesson.

Automatic Workload Repository: Overview

AWR is the infrastructure that provides services to Oracle Database 11g components to collect, maintain, and utilize statistics for problem detection and self-tuning purposes.

The AWR infrastructure consists of two major parts: an in-memory statistics collection area and the persistent snapshots stored in the database. The in-memory statistics are kept in memory for performance reasons. Statistics are kept in persistent storage for several reasons; for example, when old statistics are replaced by new ones because of a memory shortage, the replaced data can be stored for later use. The memory version of the statistics is transferred to disk on a regular basis by a background process called MMON (Manageability Monitor). With AWR, the Oracle database server provides a way to capture historical statistics data automatically, without the intervention of DBAs.

AWR stores base statistics, that is, counters and value statistics (for example, log file switches and process memory allocated). Metrics, such as physical reads per minute, are also captured, and ASH data is reduced by a factor of ten by storing to disk a random sample of the in-memory data. The examples on this page do not represent the complete list. (The slide figure shows in-memory statistics being written out as Snapshot 1 through Snapshot 4 at regular intervals.) Snapshots are used for computing the rate of change of a statistic.

Because internal advisories rely on these snapshots, be aware that adjustment of the interval setting can affect diagnostic precision. Snapshots in Real Application Clusters are captured at roughly the same time. Taking manual snapshots is supported in conjunction with the automatic snapshots that the system generates.

Manual snapshots are expected to be used when you want to capture the system behavior at two specific points in time that do not coincide with the automatic schedule. The Automatic Workload Repository page in Enterprise Manager lets you manage snapshots and the AWR settings. In general, snapshots are removed automatically in chronological order.

Snapshots that belong to baselines are retained until their baselines are removed or expire. Space consumption depends mainly on the number of active sessions in the system. A sizing script, utlsyxsz.sql, helps estimate the SYSAUX space required, and the awrinfo.sql script reports on AWR space usage. AWR handles space management for the snapshots: every night the MMON process purges snapshots that are older than the retention period. You can use the MODIFY_SNAPSHOT_SETTINGS procedure to change the retention period and the snapshot interval. The default retention is eight days and the minimum is one day; the minimum snapshot interval is 10 minutes, the maximum is measured in years, and the default is 60 minutes.

For either setting, specify NULL to keep the current value. Under exceptional circumstances, automatic snapshot collection can be completely turned off by setting the snapshot interval to 0.
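
A sketch of adjusting both settings with the documented DBMS_WORKLOAD_REPOSITORY procedure; both arguments are given in minutes, and the values below are examples only.

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 14400,    -- 10 days, expressed in minutes
    interval  => 30);      -- take a snapshot every 30 minutes
END;
/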

The automatic collection of the workload and statistical data is stopped and much of the Oracle self-management functionality is not operational.

In addition, you are unable to manually create snapshots. For this reason, Oracle Corporation strongly recommends that you do not turn off automatic snapshot collection. There are times when you may want to collect snapshots before or after particular events that do not match the automatic collection periods. These events could be test workloads or problem events that you can trigger.

For example, you can find procedures for managing snapshots and baselines in the DBMS_WORKLOAD_REPOSITORY package. The procedures shown are only a few of those provided.
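
Two of the documented procedures, sketched here; the snapshot IDs and baseline name are hypothetical values chosen for illustration.

-- Take a manual snapshot right now
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Preserve a pair of snapshots as a named baseline
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 105,              -- hypothetical snapshot IDs
    end_snap_id   => 107,
    baseline_name => 'tuesday_peak');  -- hypothetical name
END;
/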

Most of the procedures are used by Enterprise Manager to manage the Automatic Workload Repository, and you seldom need to use the procedures directly. The report contains general information about the overall behavior of the system over a time period defined by two snapshots. On this page, click the link corresponding to the number of snapshots. This opens the Snapshots page.

On the Snapshots page, select the beginning snapshot, select View Report from the Actions drop-down list, and click Go. On the View Report page, select the ending snapshot and click OK. The awrrpt.sql script can also generate the report from SQL*Plus. The script prompts for the following report options: the report type (text or HTML), the number of days of snapshots to list (entering the number of days shows you the most recent snapshots taken), the beginning and ending snapshot IDs, and the user-specified file into which the report is written.
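
As a usage sketch, the script is run from SQL*Plus with the @? shorthand, which expands to the Oracle home directory; it then walks through the prompts listed above.

-- Run as a privileged user in SQL*Plus
@?/rdbms/admin/awrrpt.sql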

The report contains the same information whether it is produced as a text or as an HTML report. The purpose of the first section is to highlight the most significant issues, and the additional sections of the AWR report have detailed information that helps diagnose the issues shown in the first section. From the Snapshots page, select the first snapshot of the first period.

A wizard guides you to select the ending snapshot of the first period and the two snapshots of the second period. A review page is displayed as the last step of the wizard; click Finish to generate the Compare Periods report. You can also generate a Compare Periods report over baselines that you have already defined. After at least two baselines are created, you can click the number of the baseline on the Automatic Workload Repository Baselines page.

From this point, you can perform the Compare Periods operation. Just follow the Compare Periods wizard to select both baselines, and click Finish.

Compare Periods

Use the Workload Repository Compare Periods report to identify detailed performance attributes and configuration settings that differ between two time periods.

For example, if the application workload is known to be stable for a given time of day but performance on Tuesday was poor during that window, you can compare the period with the same window on a day when performance was acceptable. Based on the changes reported between these two time periods, the cause of the performance degradation can be accurately diagnosed. The two time periods selected for the Workload Repository Compare Periods report can be of different duration, because the report normalizes the statistics by the amount of time spent on the server for each time period and presents the statistical data ordered by the largest difference between the periods.

Results

This slide shows a portion of the results of the Compare Periods operation, which identifies statistical differences between two snapshot periods.

This report compares the same workload executed against different tablespace configurations over the elapsed time. Comparison can be made either on a per second or a per transaction basis.

Because the workload is the same in each period, a per-transaction comparison is appropriate. The first period shows more resources used in almost every area than the second period. The bar graphs indicate the proportional value of each metric compared to the other time period. On the General tabbed page, you can also display the general statistics per second instead of per transaction, as shown in the slide.

Simply select the corresponding value in the View Data field. Clicking the Report link on this page displays an HTML report comparing the two periods, showing differences in areas such as wait events, OS statistics, services, SQL statistics, instance activity, IO statistics, and segment statistics.

If the sizes of the two time periods are different, the data is normalized over DB time before the difference is calculated, so that periods of different lengths can be compared.

Report

When you click the Report tab on the Compare Periods: Results page, you generate the Workload Repository Compare Periods report.

In addition, the Compare Periods report shows you a configuration comparison for both time periods. The header information of the report is shown in the slide. This report was taken over two periods with the same elapsed time, and we are told that the same workload script was run in each period. In this example the DB time is significantly reduced in the second period. A change that produces a performance benefit is not always so clear.

The main diagnostic sections are shown in this and following slides. The other sections of the report have more detailed information on various performance areas that you will use when the main section indicates that there is a problem in that area.

You can also generate a report with the same information by using the awrddrpt.sql script.

Load Profile

The load profile is very useful for comparing two periods.

It helps to isolate the differences in workload that may contribute to differences in the performance. In this report, the workload script is identical in both periods.


Only the database configuration has changed. We can see that DB time per second and per transaction are reduced, as are logical reads, block changes, physical reads, and physical writes. The transactions per second indicate that more work was accomplished in the same amount of time. This example has been designed to show a change that clearly produces a performance benefit. Often a change in one area shows a mixed benefit: reducing the waits in one area may cause contention and waits in another area.

Top Events

Every instance, even a well-tuned one, will have a set of top wait events. These wait events are pointers to the areas that will be the most beneficial to tune. The concern seen in the first period was the large percentage of DB time spent in buffer busy waits.

This wait event overshadowed all other wait events. In the second period we can see that buffer busy waits are no longer in the top wait events. We already know from the previous sections of the report that performance has improved.


In the second period, free buffer waits appeared and log file sync increased both in total time and percent DB time. This observation should lead to investigation of the causes and possible remedies for these waits.

The next step would be to examine the detail sections of this report related to these wait events. The persistent portion of AWR is the snapshots.

Defining the Problem

Problems can arise at any time. A proactive DBA watches for problems and corrects them before they are noticed by users.

In the past, the discovery and definition step has been tedious and frequently dependent on listening to user feedback. User feedback is important, but it is often subjective and not reproducible. In Oracle Database 11g, many of these information sources can be viewed from the Enterprise Manager interface. Changes can point to issues before they become noticeable to users.

These are the signs of an overloaded system. Have these changed? Do not overlook system- and application-specific logs. Statspack reports point to the components where the greatest waits and the greatest use of resources occur; ADDM goes further by focusing on the components with the greatest potential benefit. This question is not always easy to answer. Improperly sized memory components (an instance configuration issue) can lead to excessive swapping in the OS.

Poor disk configuration can appear to be an instance configuration problem, causing large redo file waits, commit waits, and other problems. Eliminate possibilities; the differences can guide you to the actual problem. A higher-than-normal average wait time on a particular tablespace could be due to a file being on a slow drive or an improper RAID configuration.

The OS may also be busy with other files on the same drive or partition. Determine the scope of the problem to focus your efforts on the solutions that provide the most benefit.

Setting the Priority

Determine which problem to tune first. In the performance reports you see many statistics; even a well-tuned database shows a set of top wait events.

The Oracle server provides a set of wait event statistics for processes that are idle or waiting. The Oracle server also records CPU utilization for processes that are running.

To determine the impact of a particular event, it must be compared with the overall time spent. Each request to the database server has a response time consisting of a wait time and a service time. The service time is the time spent actively working on the request (CPU time).

The wait time is, by definition, the time spent waiting for any reason. Both service time and wait time may be tuned. To tune the service time, something about how the work is done has to change; wait times can be tuned by reducing contention for the resource where the wait is occurring. Each server process is typically in one of three states: running on the CPU, waiting for a resource, or idle.

Top 5 Timed Events

The top wait events always have some values. In the example shown in the slide, the users are complaining of slow response time.

The instance is supposed to use CPU rather than wait, but this set of diagnostics may mean that the instance is CPU bound. As pointed out earlier, performance tuning can reduce either wait time or service time; in this case, the service time needs to be reduced, and SQL is the usual area to examine when reducing service time.

Setting the Priority: Example

The top wait events did not give a clear direction, so you continue with the time model to find which areas are consuming the DB time.

You can determine the top-priority tuning tasks by comparing the time spent in various waits and tasks with the overall wait time and service time. Both major tools report the time model statistics to guide your tuning efforts. The time spent in user calls is shown in the report; even from this limited view, the wait times for SQL execution are significant, and they would lead you to examine the wait statistics related to SQL execution, and the SQL reports, to identify individual SQL statements for tuning.

SQL will always take some time to execute. Therefore, the actual improvement may be much less, depending on the amount of improvement you can get from that area.

In this example, a single SQL statement is responsible for almost all of the instance activity. SQL tuning issues include poorly written SQL, ineffective use of indexes, access path costs, and sorting; these topics are covered in the SQL Tuning Workshop course.

These are all examples of poor use of the database by the application. Many resources can be accessed by only one process at a time.

These changes may lead to a degradation in performance. The proactive DBA captures and saves statistics sets from times when the database is performing acceptably, to compare with statistics captured when performance is poor and so identify the differences.

Patches, upgrades, new hardware, or changes to instance parameters can change the performance of the database. Sometimes a change improves performance in one area and causes another area to degrade.

Tuning Life Cycle Phases

Tuning follows the general methodology in all of the life-cycle phases, but different phases of the life cycle call for somewhat different approaches.

Application Design and Programming

Whenever possible, you should start tuning at this level.

With a good design, many tuning problems do not occur. Use a development and test database instance for proof of concept, and to check the performance of various design alternatives.

Database Configuration

The testing phase is a continuation of development, with more realistic tests that use production hardware and operating system.

Adding a New Application to an Existing Database

When adding a new application to an existing system, the workload changes. You should accompany any major change in the workload with performance monitoring.

Troubleshooting and Tuning

Follow the methodology. Use a test instance to determine whether the solution eliminated the bottleneck.

Tuning During the Life Cycle

Tuning during the life cycle involves two courses of action. During the design, development, and testing phases, tuning is mostly proactive; that is, scenarios and test cases are designed and tested, and the results are measured and compared against other configurations. In the deployment and production environments, the tuning is mostly reactive.

The need for hypothetical loads is removed as actual users and workloads are created, but the ability to anticipate problems also diminishes. You can monitor the database instance to observe changes and trends in performance metrics. From the information you gather by monitoring, you may be able to mitigate performance issues before they are noticed by users. The DBA may be involved in tuning from the earliest stages of design and development.

It is less expensive to correct bugs and performance problems early in the life cycle than later. The differences in tuning in the later phases of the life cycle are primarily in what is allowed.

However, a design change to improve performance may warrant a change request to the application vendor or development team.

Application Design and Development

The tuning methodology that you follow during the design and development phases focuses on the common bottlenecks of any system. Very early in the design, the major functions of the application are known, and the level of normalization of the data has serious performance consequences.

Over-normalization can lead to large multiway joins that can use all of the available host resources, while under-normalization brings another set of problems, such as redundant data and update anomalies. The solution is a fully normalized design followed by careful denormalization with built-in consistency checks. Choosing the proper data structures, such as partitioned tables for large data sets, can provide large performance benefits.

The design should avoid resource contention to increase scalability.

The major reports required for the application should be prototyped and tuned for expected run times. High-volume functions, in terms of either data or executions, should also be prototyped.

Each of these potential bottlenecks will have test cases, and those test cases are tuned with the same methodology that is used in the production database.

Database Configuration

The testing phase allows tuning at a deeper level.

The test cases should exercise the application functions, expected loads, and stress tests of improbable loads. These kinds of tests can give valuable insight into the best physical layouts and the best OS and hardware configurations. It is important to monitor hot spots, even on fast disks. You should plan the data configuration to enable a shorter recovery time and faster data access.

Incorporate the business requirements for recovery time and availability as much as possible, to allow for the overhead of these requirements. Test with loads that exhaust the machine resources; these tests identify the most heavily used resources, which are the resources that limit the scalability of the system. The DBA uses the time model and wait events at each phase to identify the bottlenecks, and tuning sessions to fix them, at each level.

Deployment

When a new application is initially deployed, the performance expectations are often different from reality. There are two variations to consider here. A new application on a new database has no baseline, so the tuning is based on current performance: generate regular performance reports and save them as baselines, and as the application grows in data-set size or number of users, compare new performance reports to the previous ones. This allows you to tune before performance degrades to an unacceptable level.

When a new application is added to an existing database, compare baseline performance reports from before and after the application is deployed. These reports show the resources that the new application is using and possible contention for resources with the existing applications. You are looking for possible problems before they are apparent to users. Sometimes, though, something has already gone wrong: a report that ran in minutes is now taking hours, response time has increased and the users are complaining, or backups are not finishing in the allotted time.

Are there more users? Is there a new report or application running?

