How to Set Up Monitoring to Alert on Windows High System Usage


One of the more overlooked tools in Windows is Perfmon, otherwise known as Performance Monitor. This utility has many underused abilities, one of which is alerting on various metric conditions. In this article, we explore how to use Perfmon's alerting ability to catch high CPU usage.

What is Perfmon?

Available since the early days of Windows in various iterations, Performance Monitor is an MMC snap-in intended to help monitor system usage and various performance metrics. The default view upon launching highlights a few different areas and real-time metrics.

  • Performance Monitor – Real-time viewing of metrics
  • Data Collector Sets – Defined collection of data over a given time interval
  • Reports – How to view the data collected in the Data Collector Sets

If Performance Monitor is not launched as an Administrator, its utility will be limited and you may not see the Data Collector Sets or Reports.

Viewing Metrics

When you first click on Performance Monitor, you will be shown a moving line graph that defaults to % Processor Time. This, by itself, isn't terribly useful, as the data is a rolling value and what we really want to know is whether adverse conditions are occurring.

You can add additional metrics to this graph by clicking on the green plus sign. Keep in mind that the value scales may not match between different counters, so combining them on a single graph may be of limited use.
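If you prefer the command line, the same counters that Perfmon graphs can be sampled with the built-in Get-Counter cmdlet. The sample interval and count below are just illustrative values:

```powershell
# Sample total CPU usage five times at one-second intervals
Get-Counter -Counter '\Processor(_Total)\% Processor Time' -SampleInterval 1 -MaxSamples 5 |
    ForEach-Object {
        # Each sample carries the counter path and the computed ("cooked") value
        $_.CounterSamples | ForEach-Object {
            '{0} -> {1:N2}' -F $_.Path, $_.CookedValue
        }
    }
```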

Data Collector Sets

Real-time data is useful, but not what we are ultimately looking for. How then do we alert on certain conditions, in this case, high CPU usage sustained over time?

This is where Data Collector Sets come in. After expanding Data Collector Sets, right-click on User Defined → New → Data Collector Set.

You will be presented with the option to name the set and to choose whether to create it from a template or manually. In this case, we need to create our configuration manually.

Here, we are setting up a Performance Counter Alert. This will monitor a given counter, and we can then tell the alert to take certain actions.

Since we are looking to monitor the total CPU percentage, choosing the correct counter is crucial. Here we choose Processor → _Total and click "Add >>" under the selected instance.

One problem is that this adds all of the Processor _Total counters, while ultimately we just want the \Processor(_Total)\% Processor Time counter. Unfortunately, you can't simply click the Remove button repeatedly, as the selection jumps back to the top each time; select each unwanted counter individually and click Remove.

We now need to tell the performance counter at what point the alert should trigger; in this case, we want it to alert only when the value is above 95 percent.

Finally, save and close the Data Collector Set.
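As an aside, the same kind of alert collector can be created from an elevated prompt with the built-in logman utility; the name and the 15-second sample interval below are example values mirroring the GUI steps above:

```powershell
# Create an alert that fires when total CPU exceeds 95%, sampling every 15 seconds
logman create alert "High CPU Usage" -th "\Processor(_Total)\% Processor Time>95" -si 00:00:15

# Confirm the configuration
logman query "High CPU Usage"
```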

Configuring Alerts

With our default configuration out of the way, we now need to configure what alert action will take place. There are two ways to set up alerts: an Alert Action and an Alert Task. Select your User Defined → High CPU Usage data collector set, right-click on the default DataCollector01 entry, and choose Properties.

The easiest way to get started is to navigate to the Alert Action tab and check the box for "Log an entry in the Application event log". You also have the convenient option to start a different data collector set when the alert criteria are met, so you can gather additional logging as needed. Here, though, we are just going to log an entry.
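Once entries are being logged, you can pull them back with Get-WinEvent. Note that on recent Windows versions, Performance Counter alerts tend to land in the Diagnosis-PLA operational log rather than the classic Application log; the log name below is an assumption worth verifying in Event Viewer on your system:

```powershell
# Log name is an assumption; confirm in Event Viewer where your alert entries land
Get-WinEvent -LogName 'Microsoft-Windows-Diagnosis-PLA/Operational' -MaxEvents 10 |
    Select-Object TimeCreated, Id, Message
```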

Configuring an Alert Task

This is all well and good, but we are not actually getting an alert, just a new event log entry. On the Alert Task tab, we can tell this Data Collector to start a scheduled task and pass it some parameters, which can then perform whatever alert actions we want. To make this work, we need to do two things: create the script to run and create the scheduled task itself.

Logging Script

Below is a very simple logging script. It reads in the alert metrics output by the Alert Task and appends the results to a log file.


# Positional arguments passed in by the scheduled task
$Date      = $args[0]
$Threshold = $args[1]
$Counter   = $args[2]

# Build a single log line, e.g. [date] High CPU above 95 | counter path
$Value = "[{0}] {1} {2} | {3}" -F $Date, 'High CPU', $Threshold, $Counter

# Append the line to the alert log
Add-Content -Value $Value -Path 'C:\HighCPUAlert.log'
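To preview what a log line will look like before wiring everything together, you can run the same -F (format) operator with sample values; the date, threshold, and counter below are made up:

```powershell
# Sample values standing in for the arguments the Alert Task will pass
$Date      = '2020-01-01 12:00:00'
$Threshold = 'above 95'
$Counter   = '\Processor(_Total)\% Processor Time'

# Same format string as the logging script
$Value = "[{0}] {1} {2} | {3}" -F $Date, 'High CPU', $Threshold, $Counter
$Value
# -> [2020-01-01 12:00:00] High CPU above 95 | \Processor(_Total)\% Processor Time
```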

Scheduled Task

Here we need to create the scheduled task that will actually run the script upon invocation by the Data Collector. We are using PowerShell to create the scheduled task and using PowerShell 7 as the run-time, as denoted by the pwsh.exe executable.

$Params = @{
    # Launch the logging script; $(Arg0) is the Task Scheduler placeholder that the
    # Alert Task fills in at run time, so single quotes keep PowerShell from expanding it here
    "Action"    = New-ScheduledTaskAction -Execute "pwsh.exe" -Argument '-NoProfile -File C:\HighCPUAlert.ps1 $(Arg0)'
    # Run as the built-in LocalService account; no stored password is required
    "Principal" = New-ScheduledTaskPrincipal -UserId "NT AUTHORITY\LocalService" -LogonType ServiceAccount
    "Settings"  = New-ScheduledTaskSettingsSet
}

New-ScheduledTask @Params | Register-ScheduledTask 'HighCPUAlert'

Until PowerShell 7 is formally released, the executable may be pwsh-preview.exe.
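With the task registered, it is worth confirming that it exists and firing it once by hand. Note that Start-ScheduledTask cannot pass arguments, so a manual run leaves the $(Arg0) placeholder empty; this is just a smoke test:

```powershell
# Confirm the task exists
Get-ScheduledTask -TaskName 'HighCPUAlert'

# Fire it once by hand (no arguments are passed this way)
Start-ScheduledTask -TaskName 'HighCPUAlert'

# Check when it last ran and its result code (0 means success)
Get-ScheduledTaskInfo -TaskName 'HighCPUAlert' | Select-Object LastRunTime, LastTaskResult
```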

Configuring Alert Task

Finally, we need to configure the Alert Task on the Data Collector. To do this navigate to the properties again of DataCollector01 and enter in the following details.

We quote the task arguments because they arrive in PowerShell as strings. By quoting them, we make it easy to separate the arguments out by index, i.e., $args[0] or $args[1].
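The effect of quoting is easy to demonstrate in plain PowerShell: each quoted token arrives as a single element of $args, while an unquoted value containing spaces splits apart. The scriptblock below is just an illustration:

```powershell
# A stand-in for the alert script: simply count the arguments received
$CountArgs = { $args.Count }

& $CountArgs 2020-01-01 12:00:00 above 95       # unquoted: 4 separate arguments
& $CountArgs '2020-01-01 12:00:00' 'above 95'   # quoted:   2 arguments
```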

Once you click Save, you may be prompted for a credential; this should be a user with Administrator access.

Running the Data Collector

By right-clicking on the High CPU Usage Data Collector Set and selecting Start, you will begin the collection process. If you watch Task Scheduler, you will see the newly created task run periodically, depending on the monitoring interval and threshold set.
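Once the collector is running, a quick way to watch alerts arrive is to tail the log file that the script writes (the path matches the logging script above):

```powershell
# Stream new lines as they are appended; press Ctrl+C to stop
Get-Content -Path 'C:\HighCPUAlert.log' -Tail 5 -Wait
```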


By using the built-in monitoring tools of Windows, you can structure some useful and powerful monitoring solutions around core utilities and PowerShell. With this flexibility, you will be able to get to the bottom of nearly any problem that can be diagnosed via metric data collection!

Adam Bertram
Adam Bertram is a 20+ year veteran of IT and an experienced online business professional. He's a consultant, Microsoft MVP, blogger, trainer, published author, and content marketer for multiple technology companies. Catch up on Adam's articles at adamtheautomator.com, connect on LinkedIn, or follow him on Twitter at @adbertram.

The above article may contain affiliate links, which help support CloudSavvy IT.