Monday, August 6, 2018

#Azure and #LogAnalytics: Audit current workspaces

In my last post, I mentioned that I have been helping a customer consolidate their workspaces and shared a script that can be used to bulk move workspaces from one subscription to another. Before you can move the workspaces, you will need to know what solutions have been added to each workspace, as some solutions cannot be moved. These solutions will need to be removed before moving the workspace, and then added back afterwards.

This script in its current form will:

  • Connect to the subscription specified
  • Retrieve all the workspaces in the subscription
  • Retrieve the enabled solutions in the workspace
  • Write a log file containing the subscription name, workspace name, resource group, pricing tier, and enabled solutions for each workspace.

Disclaimer: This script has been tested in a lab environment only. It is highly recommended that you test the script in a lab environment and adapt for your environment before using in production.

Update the variable values at the top of the script as appropriate for your environment.

###########################################################################
#
#   Get-LogAnalytics-Workspaces.ps1
#
#   This script will retrieve all the workspaces in a specific
#   subscription along with solution info
#
###########################################################################

# Variables

$Subscription = "Name of subscription you want to query"

# We are going to create a log file for this process.
# These variables determine the log file name and location

$ReportDate = Get-Date -format "yyyy-M-dd"
$rnd = Get-Random -Minimum 1 -Maximum 1000
$ReportLocation = "C:\Temp"
$ReportName = "$ReportDate-$rnd-Azure-LogAnalyticsWorkspace-Solutions-$Subscription.csv"
$SavetoFile = "$ReportLocation\$ReportName"

# Create the folder if it doesn't exist
If (!(Test-Path $ReportLocation)){
     New-Item -ItemType Directory -Path $ReportLocation
}

# Delete any previous version of the report, just in case
If (Test-Path $SavetoFile){
     Remove-Item $SavetoFile
}

# Build the report content, starting with the CSV header
$body = @()
$body += "Subscription,WorkspaceName,ResourceGroup,PricingTier,SolutionsEnabled"

# Connect to the subscription
Set-AzureRmContext -Subscription $Subscription

# Retrieve all the workspaces
$workspaces = Get-AzureRmOperationalInsightsWorkspace

# Step through the workspaces
foreach ($workspace in $workspaces)
{
     $workspacename = $workspace.Name
     $wsrg = $workspace.ResourceGroupName
     $sku = $workspace.Sku

     Write-Host $workspacename

     # Retrieve the enabled solutions for the workspace
     $solutions = (Get-AzureRmOperationalInsightsIntelligencePacks -ResourceGroupName $wsrg -WorkspaceName $workspacename).Where({$_.Enabled -eq $true})

     foreach ($solution in $solutions)
     {
          $sname = $solution.Name
          Write-Host $sname
          $body += "$Subscription,$workspacename,$wsrg,$sku,$sname"
     }
}

# Write the log file
$body >> $SavetoFile
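With the variables above, the resulting report should look something like this (the subscription, workspace, resource group, and solution names below are purely illustrative):

```
Subscription,WorkspaceName,ResourceGroup,PricingTier,SolutionsEnabled
Contoso-Prod,law-contoso-weu,rg-monitoring,PerNode,Security
Contoso-Prod,law-contoso-weu,rg-monitoring,PerNode,Updates
Contoso-Prod,law-contoso-neu,rg-monitoring,Free,AgentHealthAssessment
```

Each enabled solution gets its own row, so a workspace with several solutions will appear several times in the report.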

Friday, August 3, 2018

#Azure and #LogAnalytics: Move workspaces in bulk

I had a request from a customer recently to help them move all their Log Analytics workspaces into a new subscription, so I wrote a quick script for this purpose.

This script in its current form will:

  • Connect to the specified subscription ($OldSub)
  • Get all the workspaces in that subscription
  • Move the workspaces to the specified subscription ($NewSub)
  • Prompt for confirmation for each workspace, completing each move before starting the next in the list
  • Write a log file of each workspace processed

Disclaimer: This script has been tested in a lab environment only. It is highly recommended that you test the script in a lab environment and adapt for your environment before using in production.

ETA: I had a reminder that I forgot to mention this. Some solutions cannot be moved with the workspace, and will need to be removed before the workspace can be moved.
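As a rough sketch of that removal step (workspace, resource group, and solution names here are placeholders), a solution can be disabled with the same AzureRM module before the move and re-enabled afterwards:

```powershell
# Disable a solution (intelligence pack) on a workspace before the move.
# The resource group, workspace and solution names below are placeholders.
Set-AzureRmOperationalInsightsIntelligencePack -ResourceGroupName "MyRG" `
    -WorkspaceName "MyWorkspace" -IntelligencePackName "Security" -Enabled $false

# ...move the workspace, then re-enable the solution in the new subscription:
Set-AzureRmOperationalInsightsIntelligencePack -ResourceGroupName "NewRG" `
    -WorkspaceName "MyWorkspace" -IntelligencePackName "Security" -Enabled $true
```

See the previous post for a script that audits which solutions are enabled on each workspace, so you know what to add back afterwards.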


Update the variable values at the top of the script as appropriate for your environment.

############################################################################
#
#   Move-LogAnalytics-Workspaces.ps1
#
#   Script by: Vanessa Bruwer
#
#
#   This script will move all the workspaces in a specific subscription
#   and assumes all workspaces are managed by resource manager
#
#
############################################################################

# Variables

$OldSub = "Name of the subscription currently hosting workspaces"
$NewSub = "Name of the subscription the workspaces need to be moved to"

$NewRG = "Name of the resource group in the new sub to manage the workspaces"


# We are going to create a log file for this process.
# These variables determine the log file name and location

$ReportDate = Get-Date -format "yyyy-M-dd"

$rnd = Get-Random -minimum 1 -maximum 1000
$ReportLocation = "C:\Temp"
$ReportName = "$ReportDate-$rnd-Azure-LogAnalyticsWorkspaceMoveLog.csv"
$SavetoFile = "$ReportLocation\$ReportName"


# Create the folder if it doesn't exist

If (!(Test-Path $ReportLocation)){
     New-Item -ItemType directory -Path $ReportLocation
}


# Open the content
$body = @()

$body += "WorkspaceName,WorkspaceID,OldSub,NewSub"

# Now the real script work starts


# Connect to the old subscription

Set-AzureRmContext -Subscription $OldSub


# Get the ID for the New Subscription

$NewSubID = (Get-AzureRmSubscription -SubscriptionName $NewSub).Id


# list all the workspaces in a subscription
$Workspaces = Get-AzureRmResource -ResourceType Microsoft.OperationalInsights/workspaces


foreach ($Workspace in $Workspaces)
{
     $WSID = $Workspace.ResourceId
     $WSName = $Workspace.Name

     Move-AzureRmResource -DestinationSubscriptionId $NewSubID -DestinationResourceGroupName $NewRG -ResourceId $WSID

     #write-host "$WSName has been moved to $NewSub"

     $body += "$WSName,$WSID,$OldSub,$NewSub"
}

$body >> $SavetoFile
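Once the script completes, you may want to confirm the workspaces landed in the new subscription. A minimal check, reusing the same variables, could look like this:

```powershell
# Switch to the new subscription and list the moved workspaces
Set-AzureRmContext -Subscription $NewSub

Get-AzureRmResource -ResourceType Microsoft.OperationalInsights/workspaces `
    -ResourceGroupName $NewRG | Select-Object Name, ResourceGroupName
```

The workspace names returned should match the WorkspaceName column in the log file the script wrote.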

Monday, July 30, 2018

#OpsMgr: SNMP Management Pack Generator

One of the biggest complaints I've heard from customers over the years is that other monitoring tools let you import MIB files, while it is really hard to author management packs for SNMP-based devices in Operations Manager. And they were not completely wrong.

The product group listened, and with the release of System Center 2016 the PG gave us a lovely new tool: the System Center Operations Manager - Extensible Network Monitoring Management Pack Generator, which is a free download. While this is old news, really, I've seen a number of posts announcing it, and a bit of talk about how to use the command-line utility to convert an MP, but nothing on how to actually use the GUI tool to import a MIB and then create an MP, so I thought I would do a quick dive into this wonderful tool.

After installation, you can find the GUI tool (SNMP_MP_Generator) in the install directory. On first launch, you will be asked to create an MP and provide a save location.


You can also import MIB files at this time – or do that later, as you need.

You will now have a blank management pack, as well as access to the MIB browser:



While the MIB browser is a nice feature, the main purpose of this tool is to help you build MPs.

You want to start by adding a device. If you right-click on Project, you can choose your device type. In this example, I am going to build a management pack for monitoring Linux servers using SNMP for those scenarios where you cannot install the agent, so I am going to choose Custom Network Device.


First, we probably want to load the MIBs for the device we want to monitor, so it is easier to find the OIDs. Click on the Down Arrow button in the ribbon to locate the MIB files you want to import.


You can select a few MIBs at the same time, if need be.


You can now populate the device details form using the MIB browser.


  1. Browse the MIB tree for the device/component
  2. Find the OID
  3. Copy and Paste the OID into the device details.

Populate the other details as required and Save.

You will now be able to add performance rules for this device, or even add child components.



Build it out as much as needed



Once you are ready to test your MP in your lab, you need to generate the MP from this project.

Note: the file you created and saved using this tool is a project file, and is not ready to be imported into Operations Manager. You have to Generate the MP.

To generate the MP, choose the icon that looks like an export option.


The management pack file will be saved into the same location as the project file you created, but without the PROJ denotation in the name.

This will produce an unsealed management pack, on which you may want to perform some additional edits. The class, rule and monitor names will contain GUIDs, which you may want to replace with friendlier names, and you may also want to leverage some more advanced functionality you have created using the VSAE and Kevin Holman's fragments, for example.

Happy Authoring!

Wednesday, July 4, 2018

Enabling OS monitoring for Agentless monitored Windows Servers in #OpsMgr

One of my customers is unable to deploy the Operations Manager agent to a portion of their estate for various complex reasons, and is exploring some alternative ways of using Operations Manager to manage both Windows and Linux; one of those options is agentless monitoring. Windows operating system monitoring is disabled by default for agentless managed servers, though.

In order to enable monitoring, you will need to enable the discovery for the Windows Server Computers class for the Agentless Computers group. To do this, navigate to Discoveries, and scope your view to Windows Server. You should see only one discovery here (Discover Windows Server Computers).


Create an override for a Group:


Search for the Agentless Managed Computer Group (do not select the Agentless Exception Monitoring group)


Check the Enabled box (even if it is enabled by default)


Save into the appropriate MP, and wait about 10 minutes. You should now start seeing monitors populate in the health explorer.


And you will see some perf counters available in the Performance view


A few notes:

  • Your default action account will still need local admin rights on the agentless monitored servers in order to read the event logs, performance counters, etc.
  • Ping needs to be available between the gateway/management server and the agentless monitored device
  • Unlike agent-managed computers, there is no caching of data. If connectivity is lost, there will be gaps in reporting data. You also won't receive heartbeat failures, only "failed to ping" alerts.
  • There is a recommended limit of 60 agentless monitored servers per management group, and no more than 10 per management server/gateway – so this can’t be used to monitor the entire estate, but can help create some visibility.

Friday, March 16, 2018

Let’s talk about #opsmgr Sizing, baby

The OpsMgr 2012 Sizing Helper is an interactive document designed to assist you with planning & sizing deployments of System Center 2012 Operations Manager. It helps you plan the correct amount of infrastructure needed for a new OpsMgr 2012 deployment, removing the uncertainties in making IT hardware purchases and optimizes cost. A typical recommendation will include minimum hardware specification for each server role, topology diagram and storage requirement.

[source]

One of the issues I have encountered most frequently in the years I have supported customers running System Center Operations Manager is poor performance of their environment because it is under-resourced and over-utilized. And when we have the discussion about resources, I am often told that they used the Sizing Helper tool, so it must be right.

The Sizing Helper really is just that: a helper, a guide, a starting point. It is a guide to the basic hardware to provision for the number of agents you intend to monitor, without any consideration for additional workloads, the number of management packs, or environment health. And this is something so environment-specific that the calculator will never be able to cater for every scenario.


The Management Pack Lifecycle recommends that you review each management pack you want to import in a test environment before you implement it in production, exactly for this reason. Your process should look like this:

  1. Download new management pack (or updated version of existing management pack) as well as the management pack documentation.
  2. Read the management pack documentation.
  3. Read it again, this time taking note of the requirements for this management pack, such as run-as account requirements, additional resource requirements, configurations to be made on the target, etc.
  4. Baseline the performance and capacity of your test environment.
  5. Take note of the current workflows on the management servers. You can use Dirk Brinkman’s script to help, if unsure.
  6. Import the management pack into your test environment and configure as per the management pack documentation.
  7. Compare the performance and workflows of the environment before and after import. You will also need to take into consideration how many agents you are monitoring in your test environment vs in production, and identify the load per agent/monitored entity.
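The before/after comparison in steps 4 to 7 can be sketched with the OperationsManager PowerShell module. This is a rough illustration only, assuming you are connected to the test management group; Dirk Brinkman's script mentioned above gives a much more detailed per-server view:

```powershell
Import-Module OperationsManager

# Snapshot workflow counts before importing the new management pack
$before = @{
    Rules    = (Get-SCOMRule).Count
    Monitors = (Get-SCOMMonitor).Count
}

# ...import and configure the management pack, let it settle, then:
$after = @{
    Rules    = (Get-SCOMRule).Count
    Monitors = (Get-SCOMMonitor).Count
}

"Rules added: {0}, Monitors added: {1}" -f `
    ($after.Rules - $before.Rules), ($after.Monitors - $before.Monitors)
```

Remember to scale the difference by the number of monitored entities the new classes target, since one new rule against a common class can mean thousands of new workflows in production.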

I would also recommend you run the management pack in your test environment for at least a two-week period, and perform stress testing on the monitored entities. Generate failures, generate event storms, and put it through its paces as much as possible, to ensure you can gauge, as accurately as possible, what the impact on your production environment will be.

You will also be able to tune the noise out quite quickly this way, and ensure that, when you import the management pack into production, you are only receiving the alerts you want, you are not over-collecting performance counters you will never use, and you can keep your SCOM environment healthy.

It may sound like a long process, but it will save you a lot of time in the long run.
