Wednesday, February 13, 2019

Azure Log Analytics and Grafana - Data Source creation failed

I’ve been playing around with connecting Grafana to Azure Monitor and Azure Log Analytics for dashboarding purposes, and ran into an issue when creating the data source. The Azure Monitor portion succeeded, but the Log Analytics portion returned an authentication error that looks like this:

1. Successfully queried the Azure Monitor service.
2. Azure Log Analytics: Forbidden: InsufficientAccessError. The provided credentials have insufficient access to perform the requested operation.

I made sure I created the service principal and gave it access to the Log Analytics workspace, and even tried elevated permissions, but none of this worked.
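For reference, granting the service principal read access to the workspace can be sketched with the AzureRM module like this (the role assignment shown is a minimal example; the app ID, subscription ID, resource group and workspace names are illustrative placeholders):

```powershell
# Sketch: grant the Grafana service principal read access to the
# Log Analytics workspace. All IDs and names below are placeholders.
New-AzureRmRoleAssignment -ServicePrincipalName "<app-id-of-service-principal>" `
    -RoleDefinitionName "Log Analytics Reader" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
```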

I had created a Grafana instance from the Azure marketplace, which created a virtual machine with Ubuntu installed, and Grafana on top of that. It installed Grafana version 5.1.0, and I found a comment somewhere in my searches that an upgrade of Grafana to the latest version resolves this issue.

Following the upgrade instructions did not work, as apt-get reported that the installed version was already current.

Instead, I had to follow the installation instructions, which meant downloading the binaries for the latest version, and then installing that. Once the upgrade completed and I restarted the services, the data source created without error, and I was able to create dashboards using standard Kusto queries.
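The manual upgrade boiled down to a few commands on the Ubuntu VM, along these lines (the version number and download URL are illustrative of the pattern at the time; check grafana.com/grafana/download for the current release):

```shell
# Grab the latest .deb package (version and URL are illustrative -
# check https://grafana.com/grafana/download for the current release)
wget https://dl.grafana.com/oss/release/grafana_6.0.0_amd64.deb

# Install over the existing version; dpkg keeps the existing
# configuration in /etc/grafana
sudo dpkg -i grafana_6.0.0_amd64.deb

# Restart the service so the new version is running
sudo systemctl restart grafana-server
```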

Monday, August 6, 2018

#Azure and #LogAnalytics: Audit current workspaces

In my last post, I mentioned that I have been helping a customer consolidate their workspaces and shared a script that can be used to bulk move workspaces from one subscription to another. Before you can move the workspaces, you will need to know what solutions have been added to each workspace, as some solutions cannot be moved. These solutions will need to be removed before moving the workspace, and then added back afterwards.

This script in its current form will:

  • Connect to the subscription specified
  • Retrieve all the workspaces in the subscription
  • Retrieve the enabled solutions in the workspace
  • Write a log file containing the subscription name, workspace name, resource group, pricing tier and enabled solution names for each workspace

Disclaimer: This script has been tested in a lab environment only. It is highly recommended that you test the script in a lab environment and adapt for your environment before using in production.

Update the variable values in quotes as appropriate for your environment.

############################################################################
#
#   Get-LogAnalytics-Workspaces.ps1
#
#   This script will retrieve all the workspaces in a specific
#   subscription along with solution info
#
############################################################################

# Variables

$Subscription = "Name of subscription you want to query"

# We are going to create a log file for this process.
# These variables determine the log file name and location

$ReportDate = Get-Date -format "yyyy-M-dd"
$rnd = Get-Random -minimum 1 -maximum 1000
$ReportLocation = "C:\Temp"
$ReportName = "$ReportDate-$rnd-Azure-LogAnalyticsWorkspace-Solutions-$Subscription.csv"
$SavetoFile = "$ReportLocation\$ReportName"

# Create the folder if it doesn't exist
If (!(Test-Path $ReportLocation)){
     New-Item -ItemType directory -Path $ReportLocation
}

# Delete any previous version of the report, just in case
If (Test-Path $SavetoFile){
     Remove-Item $SavetoFile
}

# Open the content
$body = @()
$body += "Subscription,WorkspaceName,ResourceGroup,PricingTier,SolutionsEnabled"

# Connect to the subscription
Set-AzureRmContext -Subscription $Subscription

# Retrieve all the workspaces
$workspaces = Get-AzureRmOperationalInsightsWorkspace

# Step through the workspaces
foreach ($workspace in $workspaces)
{
     $workspacename = $workspace.Name
     $wsrg = $workspace.ResourceGroupName
     $sku = $workspace.Sku
     write-host $workspacename

     # Retrieve the enabled solutions for the workspace
     $solutions = (Get-AzureRmOperationalInsightsIntelligencePacks -ResourceGroupName $wsrg -WorkspaceName $workspacename).Where({$_.Enabled -eq $true})

     foreach ($solution in $solutions)
     {
          $sname = $solution.Name
          write-host $sname
          $body += "$Subscription,$workspacename,$wsrg,$sku,$sname"
     }
}

# Write the log file
$body >> $SavetoFile

Friday, August 3, 2018

#Azure and #LogAnalytics: Move workspaces in bulk

I had a request from a customer recently to help them move all their Log Analytics workspaces into a new subscription, so I wrote a quick script for this purpose.

This script in its current form will:

  • Connect to the specified subscription ($OldSub)
  • Get all the workspaces in that subscription
  • Move the workspaces to the specified subscription ($NewSub)
  • Prompt for confirmation for each workspace and run to completion for the workspace before starting with the next in the list
  • Write a log file of each workspace processed

Disclaimer: This script has been tested in a lab environment only. It is highly recommended that you test the script in a lab environment and adapt for your environment before using in production.

ETA: I was reminded that I forgot to mention this: some solutions cannot be moved with the workspace, and will need to be removed before the workspace can be moved (and added back afterwards).
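Removing and re-adding a solution can be sketched with the same AzureRM module used in these scripts, along these lines (verify the cmdlet name against your module version; the resource group, workspace and solution names are illustrative):

```powershell
# Disable a solution on the workspace before the move; re-run with
# -Enabled $true afterwards to add it back. Names are illustrative.
Set-AzureRmOperationalInsightsIntelligencePacks -ResourceGroupName "MyRG" `
    -WorkspaceName "MyWorkspace" `
    -IntelligencePackName "AzureAutomation" `
    -Enabled $false
```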


Update the variable values in quotes as appropriate for your environment.

############################################################################
#
#   Move-LogAnalytics-Workspaces.ps1
#
#   Script by: Vanessa Bruwer
#
#
#   This script will move all the workspaces in a specific subscription
#   and assumes all workspaces are managed by resource manager
#
#
############################################################################

# Variables

$OldSub = "Name of the subscription currently hosting workspaces"
$NewSub = "Name of the subscription the workspaces need to be moved to"

$NewRG = "Name of the resource group in the new sub to manage the workspaces"


# We are going to create a log file for this process.
# These variables determine the log file name and location

$ReportDate = Get-Date -format "yyyy-M-dd"

$rnd = Get-Random -minimum 1 -maximum 1000
$ReportLocation = "C:\Temp"
$ReportName = "$ReportDate-$rnd-Azure-LogAnalyticsWorkspaceMoveLog.csv"
$SavetoFile = "$ReportLocation\$ReportName"


# Create the folder if it doesn't exist

If (!(Test-Path $ReportLocation)){
     New-Item -ItemType directory -Path $ReportLocation
}


# Open the content
$body = @()

$body += "WorkspaceName,WorkspaceID,OldSub,NewSub"

# Now the real script work starts


# Connect to the old subscription

Set-AzureRmContext -Subscription $OldSub


# Get the ID for the New Subscription

$NewSubID = (Get-AzureRmSubscription -SubscriptionName $NewSub).Id


# list all the workspaces in a subscription
$Workspaces = Get-AzureRmResource -ResourceType Microsoft.OperationalInsights/workspaces


foreach ($Workspace in $Workspaces)
{
     $WSID = $Workspace.ResourceId
     $WSName = $Workspace.Name

     Move-AzureRmResource -DestinationSubscriptionId $NewSubID -DestinationResourceGroupName $NewRG -ResourceId $WSID

     #write-host "$WSName has been moved to $NewSub"

     $body += "$WSName,$WSID,$OldSub,$NewSub"
}

$body >> $SavetoFile

Monday, July 30, 2018

#OpsMgr: SNMP Management Pack Generator

One of the biggest complaints I’ve heard from customers over the years is that other monitoring tools allow you to import MIB files, while it is really hard to author management packs for SNMP-based devices in Operations Manager, and they were not completely wrong.

The product group listened, and with the release of System Center 2016 gave us a lovely new tool, the System Center Operations Manager - Extensible Network Monitoring Management Pack Generator, which is a free download. While it is old news, really, I’ve seen a number of posts announcing it and a bit of talk about using the command line utility to convert an MP, but nothing on how to actually use the GUI tool to import a MIB and then create an MP, so I thought I would do a quick dive into this wonderful tool.

After installation, you can find the GUI tool (SNMP_MP_Generator) in the install directory. On first launch, you will be asked to create an MP and provide a save location.

image

You can also import MIB files at this time – or do that later, as you need.

You will now have a blank management pack, as well as access to the MIB browser:

image


While the MIB browser is a nice feature, the main purpose of this tool is to help you build MPs.

You want to start by adding a device. If you right-click on Project, you can choose your device type. In this example, I am going to build a management pack for monitoring Linux servers using SNMP, for those scenarios where you cannot install the agent, so I am going to choose Custom Network Device.

image

First, we probably want to load the MIBs for the device we want to monitor, so it is easier to find the OIDs. Click on the Down Arrow button in the ribbon to locate the MIB files you want to import.

image

You can select a few MIBs at the same time, if need be.


You can now populate the device details form using the MIB browser.

image

  1. Browse the MIB tree for the device/component
  2. Find the OID
  3. Copy and Paste the OID into the device details.

Populate the other details as required and Save.

You will now be able to add performance rules for this device, or even add child components.

image


Build it out as much as needed.

image


Once you are ready to test your MP in your lab, you need to generate the MP from this project.

Note: the file you created and saved using this tool is a project file, and is not ready to be imported into Operations Manager. You have to Generate the MP.

To generate the MP, choose the icon that looks like an export option.

image

The management pack file will be saved into the same location as the project file you created, but without the PROJ denotation in the name.

This will produce an unsealed management pack, which you may want to edit further. The class, rule and monitor names will contain GUIDs, which you may want to replace with friendlier names, and you may also want to leverage some of the more advanced functionality you may have created using the VSAE and Kevin Holman’s fragments, for example.

Happy Authoring!

Wednesday, July 4, 2018

Enabling OS monitoring for Agentless monitored Windows Servers in #OpsMgr

One of my customers is unable to deploy the Operations Manager agent to a portion of their estate for various complex reasons, and is exploring some alternative ways of using Operations Manager to manage both Windows and Linux, one of those options being agentless monitoring. Windows operating system monitoring is disabled by default for agentless-monitored servers, though.

In order to enable monitoring, you will need to enable the discovery for the Windows Server Computers class for the Agentless Computers group. To do this, navigate to Discoveries, and scope your view to Windows Server. You should see only one discovery here (Discover Windows Server Computers).

clip_image001

Create an override for a Group:

clip_image002

Search for the Agentless Managed Computer Group (do not select the Agentless Exception Monitoring group)

clip_image003

Check the Enabled box (even if it is enabled by default)

clip_image004

Save into the appropriate MP, and wait about 10 minutes. You should now start seeing monitors populate in the health explorer.

clip_image005

And you will see some perf counters available in the Performance view

clip_image006

A few notes:

  • Your default action account will still need local admin rights on the agentless-monitored servers in order to read the event logs, performance counters, etc.
  • Ping needs to be available between the gateway/management server and the agentless monitored device
  • Unlike agent-managed computers, there is no caching of data. If connectivity is lost, there will be gaps in reporting data. You also won’t receive heartbeat failure alerts, only “failed to ping” alerts.
  • There is a recommended limit of 60 agentless monitored servers per management group, and no more than 10 per management server/gateway – so this can’t be used to monitor the entire estate, but can help create some visibility.