Expression Filter in the 2.0 schema

Until recently I used a performance mapper to get consecutive samples for my custom monitors and rules. This is not optimal, since the data is transformed into performance data along the way.

While taking a look at the Web Application Availability monitoring, I found out that the Expression Filter in the 2.0 schema version has a new element called SuppressionSettings.

This is great! With this you can set how many matches are needed before a data item is passed to the next module. You can also set the total number of samples, or a time limit within which the matches must occur.

I don’t know how I managed to miss this, but hopefully with this post I’ll help someone else not to miss this possibility.

Information from MSDN about the SuppressionSettings element in the System.ExpressionFilter module:

  • MatchCount (Integer, not overridable): Required element. Indicates how many positive matches the expression filter requires before outputting a data item. A value of 1 or 0 gives the original behavior of the Expression Filter, which is to output on every match.
  • SampleCount (Integer, not overridable): Optional element. Indicates how many total samples (both positive and negative) to store while calculating matches. This value must be greater than or equal to the match count. If it is not, or if it is missing, the sample count is set equal to the match count (that is, only consecutive matching samples will trigger output).
  • WithinSeconds (Integer, not overridable): Optional element. Indicates the time period during which a match increments the repeat count from the current item. This means there must be MatchCount matches of the expression within WithinSeconds seconds for the Expression Filter to produce a data item. If this parameter is missing, set to zero, or SampleCount is non-zero, the MatchCount/SampleCount behavior applies.
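As a sketch (the expression and threshold values here are hypothetical, not from a specific management pack), an ExpressionFilter that only outputs once three of the last five samples match could look like this:

```xml
<ConditionDetection ID="Filter" TypeID="System!System.ExpressionFilter">
  <Expression>
    <SimpleExpression>
      <ValueExpression>
        <XPathQuery Type="Integer">Value</XPathQuery>
      </ValueExpression>
      <Operator>Greater</Operator>
      <ValueExpression>
        <Value Type="Integer">90</Value>
      </ValueExpression>
    </SimpleExpression>
  </Expression>
  <!-- Output only after 3 matches within the last 5 samples -->
  <SuppressionSettings>
    <MatchCount>3</MatchCount>
    <SampleCount>5</SampleCount>
  </SuppressionSettings>
</ConditionDetection>
```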
Authoring, SCOM 2012

Customize your log file monitoring


A customer wants to monitor a text log file for an event. There is also a correlated event that indicates a good state. An alert should only be raised if no correlating event has been added to the log file within an hour.


My first thought was to create a script-based monitor to solve this. But after looking at the different log file monitor types that are available in the library, I realized that I could, with some effort, create my own customized log file monitoring to solve this problem. Since there will be a correlated event, it is possible to use a monitor instead of a rule.


Before starting on the MP, let’s take a look at an example. The MonitorType “System.ApplicationLog.GenericLog.MissingCorrelatedEventSingle2StateMonitorType” in the MP “System.ApplicationLog.Library” contains some interesting information.


We will need “log readers” and filters for matching the events with an error string. To correlate the events we need a filter for that too.

  • A “log reader” for the first matching event in the log file.
  • A “log reader” for the correlated matching event in the log file that generates Unhealthy state.
  • A “log reader” for when the monitor goes back to a Healthy state.
  • A filter for each of the “log readers”.
  • A filter for correlation.

A reference to the MP “System.ApplicationLog.Library” is needed. In my example I’ll use the alias “AppLog”.

Start with creating a new empty MP fragment in your solution.


Set an ID for the MonitorType and the states for the monitor. I’ll use a two-state.
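As a sketch (the ID and state names below are my own, not taken from the library), the shell of the two-state MonitorType could look like this:

```xml
<UnitMonitorType ID="MyMP.GenericLog.MissingCorrelatedEvent.MonitorType" Accessibility="Internal">
  <MonitorTypeStates>
    <!-- Two states: error event seen, and correlated event seen -->
    <MonitorTypeState ID="ErrorRaised" NoDetection="false" />
    <MonitorTypeState ID="CorrelatedEventRaised" NoDetection="false" />
  </MonitorTypeStates>
  <!-- Configuration, OverrideableParameters and MonitorImplementation follow -->
</UnitMonitorType>
```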


Set the parameters used under the Configuration tag. Also set which parameters that should be overrideable.
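For illustration, with hypothetical parameter names, the configuration section could be sketched like this (here only the time window is made overridable):

```xml
<Configuration>
  <xsd:element minOccurs="1" name="LogFileDirectory" type="xsd:string" />
  <xsd:element minOccurs="1" name="LogFilePattern" type="xsd:string" />
  <xsd:element minOccurs="1" name="ErrorExpression" type="xsd:string" />
  <xsd:element minOccurs="1" name="CorrelationExpression" type="xsd:string" />
  <xsd:element minOccurs="1" name="TimeWindowSeconds" type="xsd:integer" />
</Configuration>
<OverrideableParameters>
  <OverrideableParameter ID="TimeWindowSeconds" Selector="$Config/TimeWindowSeconds$" ParameterType="int" />
</OverrideableParameters>
```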


Add all member modules. All modules, except for the correlating filter, are built-in.


Set the run order of the modules. In my example I will not use On Demand detection just Regular.
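A sketch of the regular detections, using hypothetical member module IDs, might look like this. Each Node references a module declared under MemberModules, and the composition runs from the innermost node outwards:

```xml
<RegularDetections>
  <!-- Error event in the log drives the Unhealthy state -->
  <RegularDetection MonitorTypeStateID="ErrorRaised">
    <Node ID="ErrorFilter">
      <Node ID="ErrorLogReader" />
    </Node>
  </RegularDetection>
  <!-- Correlated event, evaluated through the custom correlator, drives the Healthy state -->
  <RegularDetection MonitorTypeStateID="CorrelatedEventRaised">
    <Node ID="Correlator">
      <Node ID="CorrelationFilter">
        <Node ID="CorrelationLogReader" />
      </Node>
    </Node>
  </RegularDetection>
</RegularDetections>
```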


Create the Display string.


Create a new fragment for module types and put the correlator there after taking a peek at the System.CorrelatorAutoMissingCondition module.


We need two modules in this correlator. One that handles the “correlator count” and one to filter the output.


In my example I’ll use a static threshold for the matching values. Hence, there should be one “item count” for the first event and null for the last one.


Set the modules to run in the right order.


The modules are set up. Now it is time to create the monitor. I’ll use the Unit Monitor template.


Fill in the Alert description. Populate with fitting alert parameters.


Next, set the Monitor configuration. Open the window and populate all in-parameters.


Set Monitor operations states.


After building the solution you are done.

The unsealed Management Pack can be found here.

Note, this Management Pack is developed for a lab/test environment only.

Authoring, SCOM 2012

SchedulerFilter in Web Application monitoring

A customer asked to limit the monitoring for a web site during a specific timeframe.

In this Technet Forum thread it was suggested to add the SchedulerFilter to an existing “Web Application Transaction Monitoring”.

In the following blog post I will show how to add a SchedulerFilter to a “Web Application Transaction Monitoring” object.

Start by exporting the MP and then edit it in an XML editor. Look for the DataSource module type.


From the documentation at MSDN you can find the definition of the module. In my example I will filter the workflow to run every day between 02:45 AM and 03:10 AM.

<ConditionDetection ID="Schedule" TypeID="System!System.SchedulerFilter">
  <ExcludeDates />
</ConditionDetection>
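Based on the documented schema on MSDN, a complete filter for the 02:45 AM to 03:10 AM window might look like the sketch below. The Days element is assumed to be a day-of-week bitmask and the times to be in HHMM form; verify both against the schema version you target:

```xml
<ConditionDetection ID="Schedule" TypeID="System!System.SchedulerFilter">
  <MatchList>
    <Schedule>
      <Days>127</Days>        <!-- bitmask: 127 = every day of the week -->
      <Times>
        <Time>
          <Start>0245</Start> <!-- 02:45 AM -->
          <End>0310</End>     <!-- 03:10 AM -->
        </Time>
      </Times>
    </Schedule>
  </MatchList>
  <ExcludeDates />
</ConditionDetection>
```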

Add the filter module to the DataSource and add it to the Composition order.


Save the MP and import it into the Operations Manager MG. I recommend that you test this in a Lab/Test environment before implementing in a production environment.

An example can be found here.


  • This method will only work for the “Web Application Transaction Monitoring” template, not the “Availability Monitoring” template.
  • You can’t see this filter in the Operations Manager Console, so remember to update your Operations Manager documentation.


Authoring, SCOM 2012, Web Application Monitoring

Author Reports for SCOM part 1

Part 1: Create the report

  • Part 1: Create the report
  • Part 2: Design the report
  • Part 3: Import the report into Operations Manager

In this series of blog posts I will demonstrate how to create a custom report for Operations Manager using Visual Studio. I will not go into the pre-work and how to design your SQL query or stored procedure.

In the first part of the series I will create a custom report using SQL Server Data Tools. SQL Server Data Tools is a feature of the SQL Server setup and can be installed using the SQL Server install media. (In the 2008 version of SQL it was called Business Intelligence Development Studio (BIDS).)

First off, open SQL Server Data Tools and create a new project. Select Report Server Project and name the solution and project.


In the Solution Explorer (at the top right), right-click and choose Add -> New Item …


Select the Report template and give the rdl-file a fitting name.


Next, add a new Data Source: right-click the Shared Data Sources folder in the Solution Explorer and choose Add New Data Source.


Name the Data Source.


Click Edit.. and then add the instance name and the database name for the Data Warehouse.


The Connection string has now been populated. Click OK.


Next, it is time to add a Data Source to the Report. Right-click on Data Sources and select Add Data Source …


Set the name of the Data Source (use DataWarehouseMain, without spaces, if you want to use the predefined Data Source) and select Use shared data source reference. Select the previously created Data Source.


Right-click on Datasets and select Add Dataset …


Set a name for the Dataset and select Use a dataset embedded in my report. Select the previously created Data Source. Either paste your query or use a stored procedure. In my example I will use a SQL query.


After clicking OK the first Dataset is created.

In the next post I will cover how to set the query parameters and start designing the report.

For further reading about Report authoring for SCOM I recommend the Operations Manager Report Authoring guide:

The guide is for Operations Manager 2007 R2 but is still valid for the 2012 version.

Authoring, Reports, SCOM 2007 R2, SCOM 2012

Correct time zone in reports

I recently observed that one of my custom alert reports didn’t show the correct timestamp for each alert. After some troubleshooting I found out that this report didn’t compensate for time zones. Since data is stored as GMT in the Data Warehouse, the time picker control parameters need to be compensated for this.

Looking in the code snippet that you have to paste into your report to get the smart parameters, I found a function called “ToReportDate”, which was exactly what I was looking for.

My replacement in the query parameters looked like this:
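Assuming the pasted snippet ends up in the report’s embedded code (so its functions are reachable through the standard SSRS Code object), and with hypothetical parameter names, the query parameter mapping in the RDL could look something like:

```xml
<QueryParameters>
  <QueryParameter Name="@StartDate">
    <!-- Wrap the picker value so it is converted for the GMT-based Data Warehouse -->
    <Value>=Code.ToReportDate(Parameters!StartDate.Value)</Value>
  </QueryParameter>
  <QueryParameter Name="@EndDate">
    <Value>=Code.ToReportDate(Parameters!EndDate.Value)</Value>
  </QueryParameter>
</QueryParameters>
```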


You can read more about Report authoring here:

Authoring, Reports

Invoke-WebRequest in PowerShell discovery fail

Today I encountered an error with a TimedPowerShell discovery. As I found out in the Event Log, the Invoke-WebRequest command was the cause.


The targeted server was running Windows Server 2012 with PowerShell version 3. Apparently there is a known bug:

The workaround is to open Internet Explorer once with the account executing the command. Since the SCOM agent runs as “Local System”, the solution is to launch Internet Explorer with PsExec:
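A minimal sketch of that PsExec invocation (the Internet Explorer path may differ on your system):

```shell
:: -i runs the process interactively; -s runs it under the Local System account
psexec -i -s "C:\Program Files\Internet Explorer\iexplore.exe"
```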

After starting Internet Explorer once as “Local System”, the discovery finished the script without errors.


Authoring, SCOM 2012

Switch from VMWare to Hyper-V seminar

Don’t miss the full-day seminar on how to migrate to Microsoft Hyper-V, on the 3rd of October in Stockholm. Listen to our virtualization experts Niklas Åkerlund and Kristian Nese (MVP).



Extraction Rules in Web Application Transaction Monitoring

Extraction Rules can be useful when creating synthetic transactions for a web request, for example when the page you need to monitor uses a unique session id that must be passed on from the login request to the following request.

In this example the login form uses a unique identifier to prevent Cross-site request forgery (CSRF). This needs to be passed on to the following request.

Click the Add Monitoring Wizard from the Authoring pane.


Set a Name and select a Management Pack to save it in.


Enter a URL to proceed to the next step.


Choose one or more Watcher Nodes.


Click on Start Capture to start a recording.

Note that there are some requirements to be met for the Internet Explorer window to be able to load the add-on running the web recorder. For example, IE10 is not supported, you might need to run the IE window in 64-bit mode, and third-party browser extensions must be enabled (find out more here).


Browsing to a page will record the address as seen in the Web Recorder explorer bar.


I’ll create two recordings: one for the login page and a second that actually performs the login. In the latter, the form post variables will be displayed directly in the Web Recorder.


Now there are two recordings in the editor.


Pressing Run Test will display an error.


Taking a look at the source of the login page reveals the hidden csrf variable.


In the code, a new session variable is generated whenever the login page is entered.


Now it’s time to use the Extraction Rules. You can locate the tab under Properties for the request in question.


Click the Add button to open the Add Extraction Rule window. From the source code we know that the variable entry starts with csrf" value=" and ends with a " character.


Now it is time to add that variable to the next request. Click Insert Parameter on the General tab to insert it into the request body.


Select the previously created parameter and insert it into the request body.


The test now runs successfully.


In the Monitoring pane under the view Web Application State the state is Healthy.


For another example see the TechNet page:


SCOM 2007 R2, SCOM 2012, Web Application Monitoring

Regular expression support in System Center Operations Manager

Microsoft has published a new KB describing the Regular expression support in System Center Operations Manager 2007 and 2012.

In group calculation, Perl regular expression syntax is used, while expression filtering uses .NET regular expression syntax. The KB presents tables describing these functions, along with tables of the comparison operators and wildcards used when accessing the SDK.
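For example, a .NET-style regular expression in an expression filter uses the RegExExpression element; the XPath query and pattern below are illustrative only:

```xml
<Expression>
  <RegExExpression>
    <ValueExpression>
      <!-- First parameter of the incoming data item (illustrative) -->
      <XPathQuery Type="String">Params/Param[1]</XPathQuery>
    </ValueExpression>
    <Operator>MatchesRegularExpression</Operator>
    <!-- .NET regex: lines starting with "Error " followed by digits -->
    <Pattern>^Error\s\d+</Pattern>
  </RegExExpression>
</Expression>
```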

As I have mentioned in an earlier post there is a great way of testing your regular expressions at RegexPal.

The KB article can be found here.

Authoring, Knowledge Base, SCOM 2007 R2, SCOM 2012

System Center 2012 SP1 UR2 released

System Center 2012 SP1 UR2 has now been released and can be installed from Windows Update or downloaded from the Microsoft Update Catalog.

Issues fixed in this release for Operations Manager (from the KB page):

Issue 1

The Web Console performance is very poor when a view is opened for the first time.

Issue 2

The alert links do not open in the Web Console after Service Pack 1 is applied for Operations Manager.

Issue 3

The Distributed Applications (DA) health state is incorrect in Diagram View.

Issue 4

The Details Widget does not display data when it is viewed by using the SharePoint webpart.

Issue 5

The renaming of the SCOM group in Group View will not work if the user language setting is not “English (United States).”

Issue 6

An alert description that includes multibyte UTF-8 characters is not displayed correctly in the Alert Properties view.

Issue 7

The Chinese (Taiwan) Web Console displays the following message even after the SilverlightClientConfiguration.exe program is run:

Web Console Configuration Required.
Issue 8

The Application Performance Monitoring (APM) to IntelliTrace conversion is broken when alerts are generated from dynamic module events such as the Unity Container.

Issue 9

Connectivity issues to System Center services are fixed.

Issue 10

High CPU problems are experienced in Operations Manager UI.

Issue 11

Query processor runs out of internal resources and cannot produce a query plan when you open Dashboard views.

Issue 12

Path details are missing for “Objects by Performance.”

The supported installation order:

  1. Install the update on the server infrastructure:
    1. Management server or servers
    2. Gateway servers
    3. Reporting servers
    4. Web console server role computer
    5. Operations console role computers
  2. Manually import the management packs.
  3. Apply the agent update to manually installed agents, or push the installation from the Pending view in the Operations console.

After running the exe file to unpack the files, apply the appropriate msp file to each server. Lastly, import the new MPs.

The Linux and UNIX monitoring MP has also been updated to support the new version of Operations Manager, and some issues have been fixed:

Issue 1

The Solaris agent could run out of file descriptors when many multi-version file systems (MVFS) are mounted.

Issue 2

Logical and physical disks are not discoverable on AIX-based computers when a disk device file is contained in a subdirectory.

Issue 3

Rules and monitors that were created by using the UNIX/Linux Shell Command templates do not contain overridable ShellCommand and Timeout parameters.

Issue 4

Process monitors that were created by the UNIX/Linux Process Monitoring template cannot save in an existing management pack that has conflicting references to library management packs.

Issue 5

The Linux agent cannot install on a CentOS or Oracle Linux host by using FIPS version of OpenSSL 0.9.8.

You can read more in KB2802159 found here.

The Linux and Unix monitoring MP can be downloaded here.

SCOM 2012, UR