Wednesday, April 10, 2024

PowerShell: Folder Diff

With a new git repo, I first push the source code and then re-clone it into a new local repo. To verify the .gitignore is correct, I diff the original folder containing the source code against the git clone just created. The following PowerShell script is what I use to diff the folders:

param(
    [string] $folderSource,
    [string] $folderDestination
)

Set-StrictMode -Version 3.0
$ErrorActionPreference = 'Stop'
Set-PSDebug -Off
# Set-PSDebug -Trace 1

[int] $exitCodeSuccess = 0

[int] $exitCodeError = 1

[int] $exitCode = $exitCodeSuccess

function Get-RelativeFilePaths {
    param(
        [string] $folderPath
    )

    [string] $resolvedFolderPath =
        (Resolve-Path -Path $folderPath -ErrorAction Stop).Path + '\'

    return (Get-ChildItem `
            -Path $folderPath `
            -Recurse `
            -File `
            -ErrorAction Stop
        ).FullName.Replace($resolvedFolderPath, '')
}

try {
    [string[]] $sourceFiles = Get-RelativeFilePaths $folderSource
    [string[]] $destinationFiles = Get-RelativeFilePaths $folderDestination

    Compare-Object `
        -ReferenceObject $sourceFiles `
        -DifferenceObject $destinationFiles `
        -ErrorAction Stop
}
catch {
    [System.Management.Automation.ErrorRecord] $errorRecord = $PSItem

    $exitCode = $exitCodeError
    Write-Host $errorRecord
    Write-Host (
        "Exception Message: $($errorRecord.Exception.Message), " +
        "Stacktrace: $($errorRecord.Exception.StackTrace)")
}

return $exitCode
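For a quick sanity check outside PowerShell, the same folder comparison can be sketched with diffutils' diff -rq. The snippet below builds two throwaway folders to demonstrate; the file names are hypothetical stand-ins for a source tree and its fresh git clone:

```shell
# Compare two folder trees by file content with diff -rq.
# Throwaway folders stand in for the source folder and the git clone.
src=$(mktemp -d)
dst=$(mktemp -d)
echo 'shared' > "$src/kept.txt"
echo 'shared' > "$dst/kept.txt"
# Simulate a file present only in the source (e.g. excluded by .gitignore).
echo 'local only' > "$src/only-in-source.txt"
diff -rq "$src" "$dst" || true   # non-zero exit simply means the folders differ
```

Files present in only one tree are reported as "Only in <folder>: <file>", analogous to the Compare-Object output above.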

The launch.json is as follows when running from Visual Studio Code:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Compare-Folders",
            "type": "PowerShell",
            "request": "launch",
            "script": "${workspaceRoot}/Compare-Folders.ps1",
            "cwd": "${workspaceRoot}",
            "args": [
                "<source folder>",
                "<destination folder>"
            ]
        }
    ]
}

Wednesday, April 3, 2024

Visual Studio Code: Error Generation Bicep Template from Existing Azure Resource

The latest version of Microsoft's Bicep Extension (v0.26.54) for Visual Studio Code has a new behavior that causes an error message to be generated under certain circumstances. The Bicep Extension was covered in a blog post eighteen months ago (November 26, 2022: Azure: Generate a Bicep Template from an Existing Azure Resource using Visual Studio Code), and the following error message is a newish behavior in the extension:

Caught exception fetching resource: The ChainedTokenCredential failed to retrieve a token from the included credentials. - Please run 'az login' to set up account - Please run 'Connect-AzAccount' to set up account.

The error message above is generated while attempting to use the command Bicep: Insert Resource (F1 displays the command palette, as does CTRL-SHIFT-P on Windows or CMD-SHIFT-P on Mac):

The error message is generated when a valid resource ID is entered for the Bicep: Insert Resource command and OK is clicked in order to extract the Bicep template for an existing resource. The error message (see below) suggests a solution: logging in using Azure CLI (az login) or PowerShell (Connect-AzAccount):

At the time the error is generated, the Visual Studio Code Azure Extension is logged into a valid Azure Subscription:

Visual Studio Code's accounts show that the correct Microsoft account is logged in:

The solution is to log in to Azure via the Visual Studio Code Terminal window (displayed using CTRL-`):

The Terminal window above is PowerShell, so Azure is logged in to using Connect-AzAccount (note: Azure CLI's az login could also have been used):

Once the Terminal window is used to log in to Azure, the Bicep: Insert Resource command will run successfully, meaning a Bicep template will be extracted from an existing Azure resource.

Tuesday, April 2, 2024

Visual Studio Code: Azure Extension Pack error "Expected value to be neither null nor undefined"

This post presents a solution to the following error displayed in Visual Studio Code when using the Azure Tools Extension Pack:

Internal error: Expected value to be neither null nor undefined: resourceGroup for grouping item

The Azure Tools Extension Pack (Azure Tools) is critical to Azure developers who use Visual Studio Code (see below):

When signed in to a subscription, the Azure Tools Extension displays Azure resources by default, grouped by resource group. For some subscriptions (not all) the following error is displayed by Visual Studio Code when group by is set to "Group by Resource Group":

The fix to this issue is to click on the group by icon:

This displays the following context menu:

From the context menu, select the menu item "Group by Resource Type." This change causes the error to no longer be raised. For subscriptions experiencing this error, there seems to be no way to select the Group By menu item "Group by Resource Group" without generating an error.

Sunday, March 24, 2024

Azure DevOps: Requiring Pull Requests to be Associated with a Work Item

Whether following Git Flow, GitHub Flow, GitLab Flow, or Trunk-based Development, certain policies are standard source code best practices. For example, each Pull Request (PR) must be associated with a single task/story/epic (a linked work item). This post discusses Azure DevOps support for this feature.

For a given ADO Git repo, branch policies (such as requiring a PR to be linked to a work item) are set per-branch. There is no way to assign such policies to multiple branches at once. In order to set a branch's policies, navigate to a repo's Branches tab:

For a branch whose policy is to be set, click on the three dots to show the context menu shown below: 

From the context menu, select Branch policies. From the Branch Policies tab click on Check for linked work items:

Under Check for linked work items, ensure the Required radio button is selected:

Git Branch Strategies

Requiring "check for linked work items" is set per-branch. Which branches this will be set for depends on the git branch strategy adopted.

Git Branch Strategy: Git Flow

For the Git Flow branching strategy the following branches would require a linked work item:
  • Main/Master
  • Develop
  • Feature
  • Release
  • Hotfix

Git Branch Strategy: GitHub Flow

For the GitHub Flow branching strategy the following branches would require a linked work item:
  • Main/Master
  • Feature

Git Branch Strategy: GitLab Flow

For the GitLab Flow branching strategy the following branches would require a linked work item:
  • Main/Master
  • Feature
  • Pre-Production
  • Production

Git Branch Strategy: Trunk-based Development

For the Trunk-based Development branching strategy the following branches would require a linked work item:
  • Main/Master
  • Trunk
  • Feature

Friday, March 22, 2024

Azure/PowerShell: Geolocating Storage Account White Listed IP Addresses

On a project, we had provided access to an Azure Storage account by adding permitted IP addresses to the firewall (white-listed IPs). These settings can be found by navigating to the storage account and selecting Networking under "Security + networking":

I was tasked with writing a script to list all the white-listed IP addresses and display their geographic location. The geolocation service used returns the geo data associated with an IP address and is free for non-commercial use:

The PowerShell script to return this information takes two required parameters:

  • $resourceGroupName: the resource group name associated with the storage account
  • $storageAccountName: the storage account whose white-listed IPs will be returned

The script in its entirety is as follows:

param(
    [string] $resourceGroupName,
    [string] $storageAccountName
)

Set-StrictMode -Version 3.0
$ErrorActionPreference = 'Stop'
Set-PSDebug -Off
#Set-PSDebug -Trace 1

# FYI: Import-Module Az.Storage -ErrorAction Stop

[int] $exitCodeSuccess = 0

[int] $exitCodeError = 1

[int] $exitCode = $exitCodeSuccess

try {
    Connect-AzAccount -ErrorAction Stop | Out-Null

    [Microsoft.Azure.Commands.Management.Storage.Models.PSNetworkRuleSet] $networkRuleSet =
        Get-AzStorageAccountNetworkRuleSet `
            -ResourceGroupName $resourceGroupName `
            -Name $storageAccountName `
            -ErrorAction Stop
    [Microsoft.Azure.Commands.Management.Storage.Models.PSIpRule[]] $ipRules =
        $networkRuleSet.IpRules

    $ipRules | ForEach-Object {
        [string] $ip = $_.IPAddressOrRange
        # Note: the geolocation service's base URL is omitted in the
        # original post; "$ip" below is appended to that URL.
        [PSCustomObject] $response = Invoke-RestMethod `
            -Uri "$ip" `
            -ErrorAction Stop

        # Note: the city and country fields are assumed; the original
        # post elides two of the response properties.
        Write-Output (
            "$ip, $($response.isp), $($response.city), " +
            "$($response.regionName), $($response.country)")
    } -ErrorAction Stop
}
catch {
    [System.Management.Automation.ErrorRecord] $errorRecord = $PSItem

    $exitCode = $exitCodeError
    Write-Host $errorRecord
    Write-Host (
        "Exception Message: $($errorRecord.Exception.Message), " +
        "Stacktrace: $($errorRecord.Exception.StackTrace)")
}

return $exitCode

The Get-AzStorageAccountNetworkRuleSet cmdlet is described as follows (see Get-AzStorageAccountNetworkRuleSet):

IP Ranges versus Individual IP Addresses

The following $ipRules variable contains both individual IP addresses and ranges of IP addresses:

[Microsoft.Azure.Commands.Management.Storage.Models.PSNetworkRuleSet] $networkRuleSet =
        Get-AzStorageAccountNetworkRuleSet `
            -ResourceGroupName $resourceGroupName `
            -Name $storageAccountName `
            -ErrorAction Stop
    [Microsoft.Azure.Commands.Management.Storage.Models.PSIpRule[]] $ipRules =
        $networkRuleSet.IpRules

In the code sample above it is assumed $ipRules contains only individual IP addresses; otherwise, the following code, where the geolocation service is invoked, would not work, as the expected parameter is an IP address and not an IP address range:

        [string] $ip = $_.IPAddressOrRange
        [PSCustomObject] $response = Invoke-RestMethod `
            -Uri "$ip" `
            -ErrorAction Stop

Wednesday, February 7, 2024

Azure: Virtual Machines that support WSL

Azure Nested Virtualization Capable VMs

In order to run WSL, and potentially Docker, on an Azure Virtual Machine (VM), the VM's SKU family must be hyper-threaded and capable of running nested virtualization. The following link from Microsoft Learn, Azure compute unit (ACU), marks in a table all nested-virtualization-capable VMs with three asterisks:

Azure WSL Capable VM Types

The Azure VM types that are capable of supporting WSL (i.e., capable of running nested virtualization) are as follows, from Microsoft's article, Azure compute unit (ACU), provided that the vCPU: Core column contains three asterisks:

The following include several other VM types on which WSL can be installed:

Azure Subscription may not include WSL Capable VM Types

Be aware that not every Microsoft subscription supports such virtual machines. For example, the subscription that comes with a Visual Studio subscription (the $150 free monthly Azure credit) might contain no virtual machine types that are nested-virtualization capable.

Saturday, February 3, 2024

Docker: Fails on Windows Immediately After Install

Recently I installed Docker on a Windows 11 Pro laptop. Immediately after install I attempted to run a Docker image containing PowerShell. This image was run by invoking the following command from a PowerShell console:

docker run -it

The error returned by this command was:

docker: error during connect: this error may indicate that the docker daemon is not running: Post "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/create": open //./pipe/docker_engine: The system cannot find the file specified.
See 'docker run --help'.

The error above is clear ("this error may indicate that the docker daemon is not running"), which means I had forgotten to run Docker Desktop. Windows 11 Pro and Windows 10 Pro require Docker Desktop in order to run the Docker Engine.

I started Docker Desktop, reran the docker run command, and received a subsequent error:

docker: request returned Internal Server Error for API route and version http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/create, check if the server supports the requested API version.
See 'docker run --help'.

As engineers we all have our Homer Simpson "Duh" moments. When I actually looked at Docker Desktop, I saw the following:

I had failed to accept the terms of service, so the Docker Engine was not started. The lesson learned: after installing Docker Desktop on Windows 10 Pro or Windows 11 Pro, run Docker Desktop and click the Accept button.

I am very explicit about using Windows 11 Pro (or, previously, Windows 10 Pro). The reason is that Windows 10 and Windows 11 can run Linux containers with Docker installed, but in order to run Windows containers, Windows 10 Pro or Windows 11 Pro is required.

In a previous blog post, I noted a change I'd made (indirectly) to Docker's installation instructions for Docker Desktop on Windows, Install Docker Desktop on Windows. I created a PR against the documentation noting the importance of Windows Pro, and the Docker documentation team added the following warning to their installation guide:

Friday, January 26, 2024

Azure: Open Source Contributions to Azurite emulator for local Azure Storage

Microsoft has begun using AI to write its documentation. One such AI written article is Use the Azurite emulator for local Azure Storage development which contains text like the following:

The use of the word "either" implies there should be two options, when the AI-written text actually provided three (npm, Docker Hub, or GitHub).

When contributing to documentation that requires a significant rewrite, I often make a small change first to see if there are active reviewers who can quickly approve changes. The paragraph above from the Azurite documentation contains my first trial change, specifically the text "Node Package Manager (npm)," which as of yesterday (January 25, 2024) was simply "Node Package Manager":

This page has a very active team approving PRs, as my first trivial change was committed in under a day. Here is the email notifying me that the PR for the change was merged, causing it to appear immediately on the Azurite documentation web page:

Wednesday, January 17, 2024

Azure: Naming Resources

Before creating a resource in Azure, a naming standard should be followed, such as this one proposed by Microsoft: Define your naming convention. An excerpt from the aforementioned article follows, demonstrating one of the most commonly adopted naming standards with respect to Azure resources:

The above naming strategy uses a prefix before each resource name that serves to identify the type of resource. Microsoft provides a comprehensive list of standard resource prefixes in the document Abbreviation examples for Azure resources. The prefixes for some of the most commonly used Azure resource types are as follows:

  • appi: Application Insights
  • asp: App Service Plan
  • cosmos: Azure Cosmos DB database (this is the name Azure uses which includes DB and database)
  • kv: Key Vault
  • logic: Logic App
  • sbq: Service Bus Queue
  • st: Storage Account

From the previous document the standard prefix for an Azure Function is func. An example of an Azure Function name created using Microsoft's naming convention is func-sometestfeature-dev-southcentralus-01:
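The example name can be assembled mechanically from the convention's parts. A small shell sketch (the workload and environment values are just the ones from the example above):

```shell
# Compose an Azure resource name following Microsoft's convention:
# <prefix>-<workload>-<environment>-<region>-<instance>
prefix="func"
workload="sometestfeature"
environment="dev"
region="southcentralus"
instance="01"
name="${prefix}-${workload}-${environment}-${region}-${instance}"
echo "$name"   # → func-sometestfeature-dev-southcentralus-01
```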

Tuesday, January 16, 2024

Windows Services: Fix Online Documentation ServiceController

The following links, related to the ServiceController class's documentation, were likely written prior to 2002 (.NET 1.0 was released in January 2002), and the code samples in the documentation contain some C# examples that are functionally correct but not written using best practices with respect to C# coding:

The following code was found in the above links where the code demarcated in boldface is not written using best practices:

The Status property used above in Console.WriteLine is an enumeration, ServiceControllerStatus, so there is no need to call ToString() explicitly.

The Equals method is used above to compare the Status property with a ServiceControllerStatus value. Best practice would be to use operator== instead.

The same code as above is shown below without the superfluous ToString and using operator== instead of the Equals method:

The above change was approved January 16, 2024 and merged into the main documentation branch:

Azure/PowerShell: Retrieving Azure Regions using Get-AzLocation

A standard for naming Azure resources has been proposed by Microsoft, Define your naming convention. The naming standard specifies that each Azure resource name contains a:

  • Resource type abbreviation
  • Workload/application
  • Environment (e.g. dev, QA, stage, prod, etc.)
  • Azure region
  • Instance number

A programmatic means to retrieve the Azure regions used in resource names is provided by the Get-AzLocation PowerShell cmdlet. The documentation for Get-AzLocation (Get-AzLocation) describes the cmdlet's functionality as follows:

Get-AzLocation can be invoked as follows to return a list of Azure locations (regions) that can be used in resource names:


Get-AzLocation | Select-Object -ExpandProperty Location | Sort-Object

Azure regions are not static, meaning regions are added and, in theory, could be removed. As of January 16, 2024, the Get-AzLocation invocation shown above returns the following list of Azure regions:


Sunday, January 14, 2024

Docker: Identify Linux/Windows Container Support Requirements (Open Source Contribution)

The Docker documentation, Install Docker Desktop on Windows, specified at the bottom of the instructions the operating system requirements to run Windows containers from Docker. I modified the page, but at the same time the Docker documentation team split the system requirements into multiple pages (one per operating system) and made the same basic modification I had proposed to the documentation.

My PR was acknowledged as part of the changes:

The warning entitled Important, at the bottom of this documentation, was fundamentally the proposed change, placed in close proximity to the specific version requirements:

Saturday, January 6, 2024

Git: Adding a Submodule to a Repo

Adding a Submodule

I was tasked with adding the Opkg utilities to an existing git repo so these utilities could be invoked as part of the build pipeline. The Opkg project is found at:

The logical way to add the contents of Opkg to an existing git repo is by using a git submodule. A submodule can be added to a repo by navigating to the folder in which the local repo resides:

cd my-repo-folder

From inside the local repo's folder invoke:

git submodule add

The command above creates a clone of the repo in the folder opkg-utils and creates a .gitmodules file at the local repo's root. The .gitmodules file created is as follows:

[submodule "opkg-utils"]
        path = opkg-utils
        url =

The following command shows the current changes related to any submodules:

git diff --cached --submodule

The output from the above code is as follows:

diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000..8205de2
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "opkg-utils"]
+       path = opkg-utils
+       url =
Submodule opkg-utils 0000000...1f5c57b (new submodule)

The value 1f5c57b above corresponds to the SHA of the latest commit to the opkg-utils repo:


To perform a git add and git commit for the local repo, the following is invoked:

git commit -am 'Task-123: Add Opkg as submodule'

The output from the above add/commit is as follows:

warning: in the working copy of '.gitmodules', LF will be replaced by CRLF the next time Git touches it
[master 91e677e] Task-123: Add Opkg as submodule
 2 files changed, 4 insertions(+)
 create mode 100644 .gitmodules
 create mode 160000 opkg-utils

The mode 160000 indicates opkg-utils is a submodule: in the repo, opkg-utils is recorded as a gitlink (a reference to a specific commit) rather than as an ordinary subdirectory of files.
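The 160000 gitlink entry can be seen directly with git ls-tree. The following self-contained sketch builds throwaway repos to demonstrate (paths, identities, and messages are illustrative; protocol.file.allow=always is needed on newer git versions for file-based submodules):

```shell
set -e
work=$(mktemp -d)
# A small repo standing in for opkg-utils.
git init -q "$work/sub"
git -C "$work/sub" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m 'init'
# The superproject, with the submodule added.
git init -q "$work/super"
git -C "$work/super" -c protocol.file.allow=always \
    submodule add -q "$work/sub" opkg-utils
git -C "$work/super" -c user.email=a@example.com -c user.name=a \
    commit -q -m 'add submodule'
# The submodule appears as a mode-160000 gitlink, not a blob or tree.
git -C "$work/super" ls-tree HEAD
```

The ls-tree output shows .gitmodules as an ordinary 100644 blob, while the submodule entry has mode 160000 and object type commit.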

The submodule can be committed to origin (the remote git repo) using the following command:

git push origin master

Note above that the branch name is master. This is simply because I signed up for Azure DevOps over a decade ago, before the default branch was named main and before the product was called ADO.

In Azure DevOps the opkg-utils folder looks as follows:

The SHA of the opkg-utils commit is contained in the opkg-utils folder entry, meaning the .gitmodules file is not where the SHA is stored.

Cloning to Include the Submodule

To clone a repo and include its submodules, add the --recurse-submodules parameter to the standard git clone command for the repo:

git clone --recurse-submodules https://<repo url here>

Forgetting to Clone with --recurse-submodules

If git clone is performed without --recurse-submodules, then see the following post, which shows how to add the submodule or submodules to an already cloned git repo: Git: Get Submodules from an already Cloned Repo.
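The recovery after a plain clone can be sketched end-to-end. The snippet below builds throwaway repos (illustrative paths and identities), clones without --recurse-submodules, and then fetches the missing submodule content with git submodule update --init --recursive:

```shell
set -e
work=$(mktemp -d)
# Build a superproject containing one submodule.
git init -q "$work/sub"
git -C "$work/sub" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m 'init'
git init -q "$work/super"
git -C "$work/super" -c protocol.file.allow=always \
    submodule add -q "$work/sub" opkg-utils
git -C "$work/super" -c user.email=a@example.com -c user.name=a \
    commit -q -m 'add submodule'
# Clone WITHOUT --recurse-submodules: opkg-utils is left empty.
git clone -q "$work/super" "$work/clone"
# Initialize and fetch the missing submodule after the fact.
git -C "$work/clone" -c protocol.file.allow=always \
    submodule update --init --recursive
git -C "$work/clone" submodule status
```

In git submodule status output, a leading "-" marks an uninitialized submodule; after the update --init step the entry is listed with a leading space, meaning it is checked out at the recorded commit.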