Available Physical Memory Lower Than Total

By: Tibor Nagy | Updated: 2011-02-18 | Comments (9) | Related: More > Performance Tuning

Total physical memory is always higher than available physical memory, and it could not be otherwise: total physical memory is the total amount of RAM that Windows can use. A portion of this is in active use by the OS and applications. The remainder is available for future use by any application that needs it and is labeled as 'Available'.



When the Memory: Available MBytes counter drops below 10% of the installed physical memory, the system's dependence on the disk subsystem increases. If the disk subsystem is then overwhelmed, system-wide delays can occur.

Problem

We experience regular slowdowns on our MS SQL Server database. We would like to start the root cause investigation by examining memory bottlenecks. What is your recommendation to uncover memory bottlenecks in SQL Server?

Solution

There are many possible causes of memory-related performance problems on a SQL Server instance: the source can be a limit in virtual or physical memory, memory pressure from other applications, or pressure from inside SQL Server itself. Fortunately, there are many built-in tools that can be used to track down the root cause.

Performance Monitor

Performance Monitor is part of the Microsoft Management Console; you can find it by navigating to Start Menu -> Administrative Tools. First, I would like to emphasize that the values below can vary from system to system, depending on the amount of memory, system volume, load, etc. I suggest saving metrics of the system under normal working load so you have a reference for typical values. As a starting point, review the Memory: Available [M, K] Bytes performance counter. A low amount of available memory might indicate external memory pressure. A rule of thumb is to investigate this counter when its value drops below 5% of total memory. If there are memory-related errors, you will have to look for the key memory consumers on the system. They can be identified by using the Process: Working Set performance counter. The total physical memory in use can be approximated by summing the following counters:

  • Process object, Working Set counter for each process
  • Memory object
    • Cache Bytes counter for system working set
    • Pool Nonpaged Bytes counter for size of unpaged pool
    • Available Bytes counter
    • Modified Page List Bytes counter

The Process: Private Bytes counter should be close to the size of the working set (Process: Working Set); if Private Bytes is significantly larger, part of the process memory has been paged out.

Unfortunately, these performance counters do not take the AWE mechanism into account. If AWE is enabled, you will need to look at the memory distribution inside SQL Server using the DBCC MEMORYSTATUS command or Dynamic Management Views (see below).

You also need to find out whether the page file has enough space for the virtual memory. Take a look at the following counters: Memory: Commit Limit, Paging File: %Usage and Paging File: %Usage Peak. You can estimate the amount of memory paged out per process by calculating the difference between the Process: Working Set and Process: Private Bytes counters. A high Paging File: %Usage Peak can indicate a low virtual memory condition; a solution can be to increase the size of your page file. A high Paging File: %Usage is a sign of physical memory overcommitment, so you should also look for potential external physical memory pressure.

The following performance counters from the SQL Server: Buffer Manager object can also indicate memory pressure:

  • High number of Checkpoint pages/sec
  • High number of Lazy writes/sec
  • High number of Page reads/sec
  • Low Buffer cache hit ratio
  • Low Page Life Expectancy
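The same counters can also be sampled from T-SQL through the sys.dm_os_performance_counters DMV, so you can log them without opening Performance Monitor. A minimal sketch (the object_name prefix is instance-specific, e.g. it starts with MSSQL$ followed by the instance name on a named instance):

```sql
-- Sample the Buffer Manager counters listed above from inside SQL Server.
-- Note: Buffer cache hit ratio must be divided by its 'base' counter
-- to yield a real percentage.
SELECT [counter_name], [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND [counter_name] IN (N'Checkpoint pages/sec', N'Lazy writes/sec',
                         N'Page reads/sec', N'Buffer cache hit ratio',
                         N'Page life expectancy');
```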

DBCC MEMORYSTATUS command

You can use the DBCC MEMORYSTATUS command to check for any abnormal memory buffer distribution inside SQL Server. The buffer pool uses most of the memory committed by SQL Server. Run the DBCC MEMORYSTATUS command, scroll down to the Buffer Pool section (or Buffer Counts in SQL Server 2005) and look for the Target value. It shows the number of 8 KB pages that can be committed without causing paging. A drop in the number of target pages might indicate a response to external physical memory pressure.
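For reference, the command takes no arguments and returns a series of result sets:

```sql
-- Returns detailed memory status; the Buffer Pool section
-- (Buffer Counts on SQL Server 2005) contains the Target and Committed values.
DBCC MEMORYSTATUS;
```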

If the Committed amount is above Target, continue investigating the largest memory consumers inside SQL Server. When the server is not loaded, Target normally exceeds Committed and the value of the Process: Private Bytes performance counter.

If the Target value is low but the server's Process: Private Bytes is high, you might be facing internal SQL Server memory problems with components that use memory from outside of the buffer pool. Such components include linked servers, COM objects, extended stored procedures, SQL CLR, etc. If the Target value is low but close to the total memory used by SQL Server, then you should check whether your SQL Server received a sufficient amount of memory from the system. You can also check the server memory configuration parameters.

You can then compare the Target count against the max server memory setting, if it is set. This setting limits the maximum memory consumption of the buffer pool, so the Target value cannot exceed it. A low Target count can also indicate problems: if it is less than the min server memory setting, you should suspect external virtual memory pressure.
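A quick way to read both settings is to query sys.configurations:

```sql
-- Current memory limit settings that the Target value is compared against
SELECT [name], [value_in_use]
FROM sys.configurations
WHERE [name] IN (N'max server memory (MB)', N'min server memory (MB)');
```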

My last recommendation on the DBCC MEMORYSTATUS output is to check the Stolen Pages count. A high percentage (>75%) of Stolen Pages compared to Target can be a sign of internal memory pressure.

Dynamic Management Views

You can use the sys.dm_os_memory_clerks dynamic management view (DMV) to get detailed information about memory allocation by the server components in SQL Server 2005 and 2008. Some of the DMVs provide similar data as the DBCC MEMORYSTATUS command, but their output is much more programmer friendly. For example, the following query returns the amount of memory SQL Server has allocated through the AWE mechanism.
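A query of this form does the job, using the awe_allocated_kb column that sys.dm_os_memory_clerks exposes in SQL Server 2005 and 2008:

```sql
-- Approximate total memory allocated through the AWE mechanism, in MB
SELECT SUM(awe_allocated_kb) / 1024 AS [AWE allocated, MB]
FROM sys.dm_os_memory_clerks;
```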

You can also check the amount of memory that is consumed from outside of the buffer pool through the multipage allocator.
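A query of the same shape works here, using the multi_pages_kb column:

```sql
-- Total memory served by the multipage allocator (outside the buffer pool), in MB
SELECT SUM(multi_pages_kb) / 1024 AS [Multipage allocator, MB]
FROM sys.dm_os_memory_clerks;
```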

If you are seeing significant amounts of memory (more than 100-200 MB) allocated through the multipage allocator, check the server configuration and try to identify the components that consume the most memory by using the following query:
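One way to write such a query is to group the memory clerks by type:

```sql
-- Memory clerks with multipage allocations, largest consumers first
SELECT [type], SUM(multi_pages_kb) AS [multi_pages_kb]
FROM sys.dm_os_memory_clerks
WHERE multi_pages_kb > 0
GROUP BY [type]
ORDER BY SUM(multi_pages_kb) DESC;
```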

In SQL Server 2008, you can query the sys.dm_os_process_memory DMV to retrieve similar data. Look for the physical_memory_in_use_kb, large_page_allocations_kb, locked_page_allocations_kb and memory_utilization_percentage columns. A process_physical_memory_low = 1 value indicates that the process is responding to a low physical memory notification from the OS.
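A sketch of the corresponding query (column names as in SQL Server 2008):

```sql
-- Process-level memory information from the OS point of view (SQL Server 2008)
SELECT physical_memory_in_use_kb,
       large_page_allocations_kb,
       locked_page_allocations_kb,
       memory_utilization_percentage,
       process_physical_memory_low
FROM sys.dm_os_process_memory;
```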

Check the main consumers of the buffer pool pages:
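A query along these lines, using the single-page (buffer pool) allocation column of sys.dm_os_memory_clerks in SQL Server 2005/2008, shows the per-clerk breakdown:

```sql
-- Main consumers of buffer pool (single-page) memory, largest first
SELECT [type], SUM(single_pages_kb) AS [single_pages_kb]
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY SUM(single_pages_kb) DESC;
```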

In SQL Server 2005 and 2008, an internal clock hand controls the relative size of each cache. It starts moving when the cache is about to reach its maximum size. The external clock hand moves when SQL Server as a whole comes under memory pressure. Information about the clock hands can be obtained through the sys.dm_os_memory_cache_clock_hands DMV. Each cache has a separate entry for the internal and the external clock hand. If the rounds_count and removed_all_rounds_count values are increasing, then your server is under memory pressure.

You can get additional information about the caches by joining with the sys.dm_os_memory_cache_counters DMV (note that the amount of pages is NULL for USERSTORE entries):
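A possible form of the join (the single_pages_kb/multi_pages_kb size columns are the SQL Server 2005/2008 names):

```sql
-- Clock hand activity per cache, joined to per-cache size counters.
-- Page amounts are NULL for USERSTORE entries, as noted above.
SELECT ch.[name], ch.clock_hand,
       ch.rounds_count, ch.removed_all_rounds_count,
       cc.single_pages_kb, cc.multi_pages_kb, cc.entries_count
FROM sys.dm_os_memory_cache_clock_hands AS ch
JOIN sys.dm_os_memory_cache_counters AS cc
     ON ch.cache_address = cc.cache_address
WHERE ch.rounds_count > 0
ORDER BY ch.removed_all_rounds_count DESC;
```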

Virtual Address Space consumption can be tracked by using the sys.dm_os_virtual_address_dump DMV. If the largest available region is less than 4 MB then your system is most likely under VAS pressure. SQL Server 2005 and 2008 actively monitor and respond to VAS pressure.
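A sketch of such a check against the undocumented DMV; the region_state value 0x10000 is assumed to correspond to MEM_FREE, as in the Win32 VirtualQuery API:

```sql
-- Largest contiguous free region in the virtual address space, in MB
-- (sys.dm_os_virtual_address_dump is undocumented; the MEM_FREE state
-- value is an assumption based on the VirtualQuery API)
SELECT MAX(region_size_in_bytes) / 1024 / 1024 AS [Largest free region, MB]
FROM sys.dm_os_virtual_address_dump
WHERE region_state = 0x10000;
```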

You can also use the following DMVs for memory troubleshooting both in SQL Server 2005 and 2008:

  • sys.dm_exec_cached_plans
  • sys.dm_exec_query_memory_grants
  • sys.dm_exec_query_resource_semaphores
  • sys.dm_exec_requests
  • sys.dm_exec_sessions
  • sys.dm_os_memory_cache_entries

There are several new DMVs in SQL Server 2008 which make it easier to gather memory diagnostic information. I would like to summarize these new DMVs for memory troubleshooting:

  • sys.dm_os_memory_brokers provides information about memory allocations using the internal SQL Server memory manager. The information provided can be useful in determining very large memory consumers.
  • sys.dm_os_memory_nodes and sys.dm_os_memory_node_access_stats provide summary information of the memory allocations per memory node and node access statistics grouped by the type of the page. This information can be used instead of running DBCC MEMORYSTATUS to quickly obtain summary memory usage. (sys.dm_os_memory_node_access_stats is populated under dynamic trace flag 842 due to its performance impact.)
  • sys.dm_os_nodes provides information about CPU node configuration for SQL Server. This DMV also reflects software NUMA (soft-NUMA) configuration.
  • sys.dm_os_sys_memory returns the system memory information. The 'Available physical memory is low' value in the system_memory_state_desc column is a sign of external memory pressure that requires further analysis.
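For example, sys.dm_os_sys_memory can be read directly:

```sql
-- System-wide memory state as seen by the OS (SQL Server 2008)
SELECT total_physical_memory_kb,
       available_physical_memory_kb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```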

Resource Governor

The Resource Governor in SQL Server 2008 Enterprise Edition allows you to fine-tune SQL Server memory allocation strategies, but incorrect settings can be a cause of out-of-memory errors. The following DMVs provide information about the Resource Governor feature of SQL Server 2008: sys.dm_resource_governor_configuration, sys.dm_resource_governor_resource_pools and sys.dm_resource_governor_workload_groups.
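As a starting point, the per-pool view shows configured limits next to actual usage (column list per SQL Server 2008):

```sql
-- Resource Governor pools: configured limits vs. actual memory usage
SELECT pool_id, [name],
       min_memory_percent, max_memory_percent,
       used_memory_kb, target_memory_kb, out_of_memory_count
FROM sys.dm_resource_governor_resource_pools;
```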

SQL Server ring buffers

Another source of diagnostic memory information is the sys.dm_os_ring_buffers DMV. Each ring buffer keeps a record of the last notifications of its kind. You can query the ring buffer event counts using the following code:
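A simple version of such a query counts records per ring buffer type:

```sql
-- How many records each ring buffer currently holds
SELECT ring_buffer_type, COUNT(*) AS [records]
FROM sys.dm_os_ring_buffers
GROUP BY ring_buffer_type
ORDER BY COUNT(*) DESC;
```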

Here is a list of ring buffers of interest:

  • RING_BUFFER_SCHEDULER_MONITOR: Stores information about the overall state of the server. The SystemHealth records are created with one minute intervals.
  • RING_BUFFER_RESOURCE_MONITOR: This ring buffer captures every memory state change by using resource monitor notifications.
  • RING_BUFFER_OOM: This ring buffer contains records indicating out-of-memory conditions.
  • RING_BUFFER_MEMORY_BROKER: This ring buffer contains memory notifications for the Resource Governor resource pool.
  • RING_BUFFER_BUFFER_POOL: This ring buffer contains records of buffer pool failures.

SQL Server Profiler

This tool is part of the SQL Server program suite; you can find it in Start Menu -> {your MS SQL Server version} -> Performance Tools. Additional information can be found in the articles Working with SQL Server Profiler Trace Files and Creating a Trace Template in SQL Server Profiler.

Log File Viewer in SQL Server Management Studio

The Log File Viewer lets you check the SQL Server error log and the Windows application and system logs in one place.

Next Steps
  • Collect and compare performance counters.
  • Study DBCC MEMORYSTATUS output.
  • Analyze DMV information.
  • Check Resource Governor configuration.

About the author
Tibor Nagy is a SQL Server professional in the financial industry with experience in SQL 2000-2012, DB2 and MySQL.

Analysis Services Memory Limits

One of the least understood areas of Analysis Services performance seems to be that of memory limits – how they work, and how to configure them effectively. I was going to start with an overview of Performance Advisor for Analysis Services (aka PA for SSAS), but during the beta I ran across such a good example of the software in action diagnosing a memory problem that I think it'll work just as well to go through that issue step by step.

First some background. SSAS has two general categories of memory, shrinkable and nonshrinkable, and they work pretty much like it sounds. Shrinkable memory can be easily reduced and returned back to the OS. Nonshrinkable memory, on the other hand, is generally used for more essential system-related stuff such as memory allocators and metadata objects, and is not easily reduced.

PA for SSAS shows both types on the SSAS Memory Usage chart.

If you switch to "Memory by Category" mode via the right-click menu, you get a more detailed breakdown; just like the chart above, the memory with hash marks is nonshrinkable.

SSAS uses memory limit settings to determine how it allocates and manages its internal memory. Memory\LowMemoryLimit defaults to 65% of the total available physical memory on the machine (75% on AS2005), and Memory\TotalMemoryLimit (also sometimes called the High memory limit) defaults to 80%. This is the total amount of memory that the SSAS process itself (msmdsrv.exe) can consume. On the charts above, the orange line represents the Low memory limit and the red line the Total memory limit, which makes it very easy to see how close SSAS's actual memory consumption is to these limits. Once memory usage hits the Low limit, memory cleaner threads kick in and start moving data out of memory in a relatively non-aggressive fashion. If memory hits the Total limit, the cleaner goes into crisis mode: it spawns additional threads and gets much more aggressive about memory cleanup, and this can dramatically impact performance.

Between the Low and Total limits, SSAS uses an economic memory management model to determine which memory to cleanup. This model can be adjusted by some other parameters related to the memory price, but I’m not going to get into those here.

When it comes to memory management, SSAS is entirely self-governing; unlike SQL Server, it doesn't consider external low physical memory conditions (which can be signaled by Windows) or low VAS memory. This may be partly because SSAS is already much more subject to Windows' own memory management than SQL Server is: Analysis Services databases are a collection of files on the file system and can make heavy use of the file system cache, whereas SQL Server does not. This means that you can have a significant amount of SSAS data loaded into RAM by virtue of the file cache, and this will not show up as part of the SSAS process memory or in any way be governed by the SSAS memory limits. (For this reason we now show system file cache usage and file cache hit ratios on the dashboard as well.)

On a side note, a byproduct of this design is that if available memory is low and other processes on the server are competing for memory resources, SSAS will pretty much ignore them, and look only to its own memory limit settings to determine what to do. This means, for example, that memory for a SQL Server using the default “Minimum server memory” setting of 0 can be aggressively and dramatically reduced by an SSAS instance on the same machine with default memory limits, to the point where the SQL Server can become unresponsive. SQL Server is signaled by Windows that memory is low, and responds accordingly by releasing memory back to the OS, which is then gobbled up by SSAS.

Graphical Anomaly, or Serious Memory Problem?

Very early in our beta for PA for SSAS, a beta tester reported what they thought was an odd graphical anomaly in the bottom right of the SSAS Memory Usage chart.

To the untrained eye, this would certainly look like an anomaly, but once you’ve seen what this chart is supposed to look like, it’s fairly obvious that it’s an “H” on top of an “L”.

This particular tester had been eager to get the beta because they had suffered from poor performance and other strange behaviors for quite some time. They had actually already scheduled Microsoft to come in and help troubleshoot the issues.

What exactly did this "anomaly" mean? What caused it, how was it resolved, and what was the net effect on performance, if any? All great questions… and they will be the subject of my next post, so stay tuned ;-)

If anyone has any guesses in the meantime feel free to comment. The first person to get it right will win a free license of Performance Advisor for Analysis Services! (that’s a $2495 value)