Thursday, 27 March 2025

Azure MFA NPS Extension Configuration Failure Due to Microsoft Graph PowerShell Module Conflicts

Introduction

When configuring the Azure MFA NPS Extension using the official script (AzureMfaNpsExtnConfigSetup.ps1), administrators may encounter a failure during the update of the Azure Active Directory service principal. The script attempts to push certificate data using the Microsoft Graph PowerShell SDK, and the process fails with the following error:

Update-MgServicePrincipal : Cannot convert the literal '<cert_blob>' to the expected type 'Edm.Binary'.
Status: 400 (BadRequest)

This issue typically occurs due to multiple versions or conflicting installations of the Microsoft Graph PowerShell modules, which can lead to serialization or data formatting errors during API operations.

This KB provides a tested remediation process by removing all existing Graph modules and reinstalling the required components cleanly.
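
Before starting remediation, you can confirm the conflicting state by listing every installed version of each Graph module; more than one version per module name is the typical trigger for this error. This is a read-only check:

```powershell
# List each Microsoft.Graph module that is present in more than one version;
# any output here indicates the conflicting state described above
Get-Module Microsoft.Graph* -ListAvailable |
    Group-Object Name |
    Where-Object { $_.Count -gt 1 } |
    Select-Object Name, Count
```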

Instructions

✅ Step 1 – Manually Uninstall All Microsoft.Graph Modules

Open PowerShell as Administrator and run the following script to uninstall all installed versions of any Microsoft.Graph modules:

$Modules = Get-Module Microsoft.Graph* -ListAvailable | Where {$_.Name -ne "Microsoft.Graph.Authentication"} | Select-Object Name -Unique
Foreach ($Module in $Modules)
{
    $ModuleName = $Module.Name
    $Versions = Get-Module $ModuleName -ListAvailable
    Foreach ($Version in $Versions)
    {
        $ModuleVersion = $Version.Version
        Write-Host "Uninstall-Module $ModuleName $ModuleVersion"
        Uninstall-Module $ModuleName -RequiredVersion $ModuleVersion
    }
}

# Uninstall Microsoft.Graph.Authentication
$ModuleName = "Microsoft.Graph.Authentication"
$Versions = Get-Module $ModuleName -ListAvailable
Foreach ($Version in $Versions)
{
    $ModuleVersion = $Version.Version
    Write-Host "Uninstall-Module $ModuleName $ModuleVersion"
    Uninstall-Module $ModuleName -RequiredVersion $ModuleVersion
}

After the script completes, rerun it until the following command returns no results:

Get-InstalledModule | Where-Object { $_.Name -like "Microsoft.Graph*" }

📝 Note: Some modules may be reloaded or may not uninstall cleanly on the first attempt, especially if there are versioning or dependency overlaps. Repeating the uninstall step ensures a clean removal.

✅ Step 2 – Install Required Microsoft Graph Modules

Once all previous versions are removed, install only the latest required modules:

Install-Module Microsoft.Graph
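
If the NPS service runs in a machine context, installing for all users avoids the module being visible only to your admin profile. The -Scope value below is a suggestion, not a requirement from the configuration script; adjust to your environment, then confirm a single consistent set of modules is present:

```powershell
# Install for all users so non-interactive service contexts can load the module
Install-Module Microsoft.Graph -Scope AllUsers -Force

# Confirm exactly one version of each Microsoft.Graph module is now installed
Get-InstalledModule Microsoft.Graph* | Select-Object Name, Version
```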

✅ Step 3 – Rerun the Configuration Script

With a clean module set installed, rerun the configuration script:

& "C:\Program Files\Microsoft\AzureMfa\Config\AzureMfaNpsExtnConfigSetup.ps1"

The script should now complete successfully, allowing Azure MFA to integrate with the NPS service and Remote Desktop Gateway.

Additional Notes

This issue has been independently reported in the sysadmin community and is reproducible in environments where Microsoft Graph modules are upgraded over time without cleanup.

No interference was found from Microsoft Defender in typical environments, though exclusions may still be useful in some configurations.

Tuesday, 11 March 2025

Transition from LBFO to SET in Windows Server 2022 for Hyper-V Networking

Load Balancing and Failover (LBFO) refers to the traditional NIC Teaming feature in Windows Server that allows multiple physical network adapters to act as one for load distribution and redundancy. Switch Embedded Teaming (SET) is a newer technology introduced in Windows Server 2016 that integrates NIC teaming directly into the Hyper-V virtual switch​ (Link). The key difference is that LBFO is an independent teaming mechanism at the host level, whereas SET is built into the Hyper-V switch itself (hence “switch embedded”). Microsoft has shifted toward SET for Hyper-V environments because it simplifies the stack and enables advanced capabilities (like RDMA and faster VM networking) not possible with LBFO. Starting with Windows Server 2022, a Hyper-V virtual switch cannot be bound to an LBFO team – it must use a SET team​(Link). (In other words, LBFO is deprecated for Hyper-V networking.) This change was made to improve performance and support new features. For example, SET allows teaming on RDMA-capable NICs and even guest RDMA, as well as features like Dynamic Virtual Machine Queue (Dynamic VMMQ)​(Link), which were not supported with the older LBFO approach. In summary, Microsoft now recommends using SET for Hyper-V networking because it provides better integration with the hypervisor and future-proofs the environment.

Differences between LBFO and SET: LBFO (the older NIC Teaming) offered more flexibility in some ways – it could team a larger number of NICs (in Windows Server 2019/2022, up to 32 adapters could be in one LBFO team) and had no strict requirement that NICs be identical. SET, on the other hand, supports a maximum of 8 physical NICs in a team​(Link) and requires the adapters to be symmetric (same make, model, speed, and configuration) for best results​(Link). Another difference is in teaming modes: LBFO supports various teaming modes including Switch Dependent options (like LACP or static link aggregation) and Switch Independent mode. SET only supports Switch Independent mode, with the Hyper-V switch handling the distribution of traffic​(Link). This means features like LACP are not available with SET, but this simplification reduces complexity in Hyper-V scenarios. In practice, LBFO allowed teaming across mixed adapters and multiple switches, whereas SET requires a uniform set of NICs and is designed to work with the Hyper-V virtual switch exclusively. Microsoft’s decision to shift to SET for Hyper-V reflects the aim to streamline networking for virtualization and enable high-performance features (e.g. SET is the only supported teaming method for new software-defined networking scenarios and Azure Stack HCI). While LBFO was mature and stable, it will not see new improvements for Hyper-V usage (it remains supported only for non-virtualization scenarios). The move to SET ensures Hyper-V networks can leverage modern networking enhancements that the older LBFO teaming could not support.

Workload Support in Windows Server 2022

It’s important to distinguish which scenarios still support LBFO versus those that require SET in Windows Server 2022:

  • Non-Hyper-V Workloads (Physical/Standalone Roles): Traditional LBFO NIC Teaming is still fully supported for non-Hyper-V scenarios in Windows Server 2022​ (Link). This means if you have a file server, SQL server, or any standalone server role that benefits from NIC teaming for higher availability or throughput (and not using a Hyper-V virtual switch), you can continue to use LBFO as before. The LBFO management UI and PowerShell (New-NetLbfoTeam, etc.) are still present for these use cases. For example, teaming NICs for a standalone cluster heartbeat network or a general active/passive failover team on a physical server is allowed with LBFO.

  • Hyper-V and Virtualization Workloads: Any scenario involving a Hyper-V virtual switch (i.e. networking for virtual machines) must use Switch Embedded Teaming (SET) in Windows Server 2022. The Hyper-V virtual switch will not bind to an LBFO team interface in this release​(Link). If you attempt to create an External vSwitch on an existing LBFO NIC team, it will be blocked (the Hyper-V Manager GUI in 2022 will throw an error, as LBFO for vSwitch is deprecated). Instead, the NIC teaming for Hyper-V needs to be done via SET as part of the vSwitch creation. This applies to Hyper-V hosts and scenarios like Software Defined Networking (SDN) or Azure Stack HCI as well. (For instance, Azure Stack HCI and other SDN solutions only support SET for host teaming, not LBFO​(Link).) In summary, any workload involving virtual machine networking in WS2022 requires SET, whereas LBFO is reserved for legacy purposes outside of virtualization.

To put it simply: Use LBFO for non-virtualized roles; use SET for Hyper-V hosts. Microsoft’s support stance reflects this – the change “only applies to Hyper-V” and LBFO remains supported for other scenarios​(Link), but if you’re running Hyper-V, the recommended and supported teaming method is SET.

SET Teaming Configuration (PowerShell Steps)

Configuring a Switch Embedded Teaming team for Hyper-V in Windows Server 2022 can be done with PowerShell. Below are step-by-step instructions to set up a SET team optimized for Hyper-V, incorporating Microsoft’s recommendations for performance tuning:

  1. Plan and Prepare – Identify the physical NICs on the Hyper-V host that will form the SET team. Ensure these NICs have identical link speed and capabilities (it’s best if they are the same model and firmware) because SET requires symmetric adapters for optimal performance​(Link). For example, if you plan to team two 10 GbE adapters for your virtual switch, verify both are from the same vendor/model and running at 10 Gbps. Also ensure no existing LBFO team is configured on them; the physical NICs should be standalone and enabled.

  2. Create the Hyper-V Switch with Embedded Teaming – Use the New-VMSwitch cmdlet to create a new external virtual switch and specify multiple NICs for the -NetAdapterName parameter. This will automatically create a SET team as part of the switch. For example:

    New-VMSwitch -Name "HyperV-TeamSwitch" -NetAdapterName "NIC1","NIC2" -AllowManagementOS $true -EnableEmbeddedTeaming $true

    In this command:

    • "HyperV-TeamSwitch" is the name of the new virtual switch (you can choose any friendly name).
    • -NetAdapterName "NIC1","NIC2" specifies the two physical network adapters to team. Replace "NIC1","NIC2" with the actual interface names of your adapters (as shown by Get-NetAdapter). You can list up to 8 NICs here for a SET team (the limit supported by SET)​(Link).
    • -AllowManagementOS $true (optional) allows the host OS to share this NIC team for management traffic. Include this if you want the Hyper-V host itself to have an IP on the teamed interface (commonly true if this team also carries host management or cluster traffic).
    • -EnableEmbeddedTeaming $true explicitly tells Hyper-V to create an embedded team. (Note: When you provide multiple NICs, PowerShell treats it as a SET team automatically. This switch is a safeguard, especially if using one NIC now and adding others later.)

    This single command replaces the old multi-step process of creating an LBFO team then attaching a vSwitch. It creates the virtual switch and teams the NICs in one step, since the teaming is integrated into the switch with SET. After running New-VMSwitch, you should have a new vSwitch visible (e.g., in Hyper-V Manager or via Get-VMSwitch) and the physical NICs will be part of the switch’s team.

  3. Optimize Load Balancing Algorithm – By default, a SET team uses the Dynamic load-balancing algorithm​(Link), which in many cases is fine. However, Microsoft documentation recommends using the Hyper-V Port algorithm for best performance on high-speed adapters (10 Gbps and above)​(Link). Depending on your workload and NIC speed, you may want to set the algorithm to HyperVPort. You can configure this using the Set-VMSwitchTeam cmdlet. For example:

    Set-VMSwitchTeam -Name "HyperV-TeamSwitch" -LoadBalancingAlgorithm HyperVPort

    This command changes the team named "HyperV-TeamSwitch" to use Hyper-V Port mode for load balancing. (The TeamingMode is implicitly SwitchIndependent for SET and cannot be changed – SET doesn’t support LACP​(Link), so no need to specify the teaming mode.) The Hyper-V Port algorithm distributes network traffic based on the virtual switch port (essentially per-VM distribution). This mode ensures each VM’s traffic is affinitized to a particular physical NIC, which can improve throughput consistency and avoid packet reordering on 10 GbE+ networks​(Link). If your host has many VMs or you are using very fast NICs, Hyper-V Port mode is often beneficial. On the other hand, Dynamic mode (the default) uses a combination of port and flow hashing to spread traffic and can yield better NIC utilization in some scenarios (it attempts to use all team members for outbound traffic). You should choose the mode based on Microsoft’s best practices: Hyper-V Port for 10 Gbps or higher NICs (and many VMs), Dynamic for general purpose or lower-speed networks​(Link)(Link). You can always adjust this setting with Set-VMSwitchTeam after initial setup.

  4. (Optional) Additional Tuning – If your deployment requires features like SR-IOV or RDMA, ensure to configure those on the vSwitch at creation. For SR-IOV, include -EnableIov $true in the New-VMSwitch command (assuming your adapters support SR-IOV). For RDMA on a SET, make sure the physical NICs support RDMA and consider using Set-VMNetworkAdapter to enable Virtual RSS (vRSS) on the VM adapters for better scalability of network processing. Also verify that features like Virtual Machine Queue (VMQ) are enabled on each physical NIC. In Windows Server 2022, these are usually enabled by default when using SET, but it’s good to double-check (you can run Get-NetAdapterVmq on each team member). The goal is to follow Microsoft’s performance tuning guidance for Hyper-V networking: enable offloads and virtualization features that are compatible with SET. (SET inherently allows Dynamic VMMQ, which automatically distributes incoming VM traffic processing across multiple CPU cores, improving performance on high throughput links​(Link).)
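
As a sketch of the optional tuning above: SR-IOV must be enabled when the switch is created, so when it is required the step 2 command would include -EnableIov from the start (this replaces, not follows, the earlier New-VMSwitch). The switch and NIC names continue the earlier examples; the follow-up checks are read-only:

```powershell
# SR-IOV must be set at creation time; this variant replaces the step 2 command
New-VMSwitch -Name "HyperV-TeamSwitch" -NetAdapterName "NIC1","NIC2" `
    -AllowManagementOS $true -EnableEmbeddedTeaming $true -EnableIov $true

# Read-only checks: RDMA capability and VMQ state on each team member
Get-NetAdapterRdma -Name "NIC1","NIC2"
Get-NetAdapterVmq -Name "NIC1","NIC2" | Select-Object Name, Enabled
```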

By following the above steps, you will have created a Hyper-V switch that uses Switch Embedded Teaming under the hood, providing both redundancy and load balancing for your Hyper-V host’s networking. This configuration is fully supported in Windows Server 2022 and aligns with Microsoft’s recommended practices for Hyper-V networking.

Hyper-V Port vs. Dynamic Load Balancing Modes in SET

When using SET, there are two load balancing algorithms available for distributing traffic across the teamed NICs: Hyper-V Port and Dynamic (Link). Understanding these modes and when to use each is important for optimizing network performance on Hyper-V hosts:

  • Hyper-V Port Mode: In this mode, each virtual switch port (which typically corresponds to a VM’s virtual network adapter, or the host’s management vNIC) is tied to a specific physical NIC in the team. All traffic for a given VM will egress through one team member interface (though inbound traffic to the host is automatically balanced by the Hyper-V switch across NICs based on VM port as well). This one-VM-to-one-NIC mapping ensures no single VM’s traffic is split across multiple physical NICs. The benefit is that it avoids potential issues with out-of-order packets and is easier on upstream switches (each VM’s MAC/IP consistently comes from one NIC, preventing “flapping”). Microsoft notes that Hyper-V Port is often the best choice for high-bandwidth networks – specifically, it is recommended for NICs 10 Gbps or faster to achieve optimal performance​(Link). In environments with many VMs, Hyper-V Port mode naturally spreads the VMs across the physical NICs (e.g. one VM’s traffic on NIC1, another on NIC2, etc.), achieving load distribution at a per-VM level. Use Hyper-V Port mode if you have very fast adapters or if you’ve observed better stability with it in your network. It’s particularly suited for scenarios where each VM can generate significant traffic on its own, as it guarantees that VM can use up to one NIC’s worth of bandwidth without interference from load-balancing algorithms.

  • Dynamic Mode: Dynamic is a more complex algorithm that combines elements of both outbound flow-based distribution and inbound port-based distribution. In practical terms, Dynamic mode will distribute outgoing traffic across the team NICs based on flow hashing, and still use Hyper-V Port for inbound traffic to ensure stability. Microsoft set Dynamic as the default load balancing algorithm for SET teams​(Link) because it aims to utilize all NICs efficiently even if a single VM is very busy. For example, if one VM’s traffic is heavy, dynamic mode can spread different TCP streams (flows) from that VM across multiple NICs, potentially exceeding the throughput of a single NIC. This can maximize aggregate bandwidth (as seen in some cases where dynamic mode allowed using the full team bandwidth). Dynamic mode is generally recommended by Microsoft for most scenarios since it provides a good balance of load distribution. However, it can be more sensitive to switch configurations – because multiple NICs may carry traffic for the same VM or IP, your physical switch (if not in a stable configuration for independent teaming) might log MAC address moving or “IP flapping” alerts. In a properly configured Switch Independent scenario (no EtherChannel/LACP on the switch ports), dynamic mode should work well. Use Dynamic when you want the team to automatically balance traffic and you have relatively moderate NIC speeds (1 GbE or 10 GbE where each VM alone might not saturate a NIC). It’s the default for a reason: in many deployments it yields the best overall throughput distribution across a team.

Best Practice: For Windows Server 2022 Hyper-V, start with the default Dynamic mode, but consider switching to Hyper-V Port mode on hosts with 10 GbE or higher NICs, or if you encounter stability issues with dynamic. Microsoft’s official guidance suggests Hyper-V Port on >=10 Gbps networks for best performance​(Link), as mentioned. Remember that in all cases with SET, the teaming mode is always Switch Independent (the physical switch does not need special configuration)​(Link), so these algorithms operate at the host level. If you change the algorithm with Set-VMSwitchTeam, the change takes effect immediately and you can monitor performance to decide which works better for your environment. Both modes will provide fault tolerance (failover to the remaining NIC if one fails), so the choice mainly impacts load balancing behavior.

Verification Steps for SET Configuration

After setting up a SET team for Hyper-V, you should verify that the configuration is correct and optimized as intended. Use the following PowerShell commands and checks to confirm a successful deployment:

  • List the Virtual Switch and Team Members: Run Get-VMSwitchTeam -Name "<SwitchName>" to retrieve information about the switch’s team. For example:

    Get-VMSwitchTeam -Name "HyperV-TeamSwitch"

    This command will display the details of the SET team associated with the switch “HyperV-TeamSwitch.” You should see output listing the Team Members (the physical NICs in the team), the TeamingMode (which will show as SwitchIndependent), and the LoadBalancingAlgorithm (Dynamic or HyperVPort, depending on what you configured)​(Link)(Link). Verify that all expected NICs are present in the team and that the load-balancing mode matches your intended setting. For instance, if you set Hyper-V Port mode for performance, ensure the output shows LoadBalancingAlgorithm : HyperVPort. If anything is incorrect, you can re-run the Set-VMSwitchTeam command to adjust settings.

  • Check the Virtual Switch Properties: You can also run Get-VMSwitch -Name "<SwitchName>" | Format-List * to see detailed properties of the virtual switch. In the detailed output, confirm that AllowManagementOS is set to True (if you intended the host to have access), and look at the NetAdapterInterfaceDescription or NetAdapterName field which should list the teamed adapters. This confirms the switch is indeed bound to the multiple physical adapters (indicating a SET team). Additionally, the SwitchType should be External (for an external vSwitch). While this cmdlet doesn’t explicitly enumerate team members as clearly as Get-VMSwitchTeam, it’s useful for checking that the switch was created with the correct parameters.

  • Validate NIC Status: Ensure all physical NICs in the team are up and functioning. Use Get-NetAdapter -Name "<NIC1>","<NIC2>" to check the link status and speed of each member NIC. Each should show Status: Up and the expected LinkSpeed (e.g., 10 Gbps). If a NIC is down or has a mismatched speed, the team may not perform optimally. All team members should be connected to the appropriate switch ports with identical configurations (no VLAN set on one and not the other, etc.). Remember that SET requires symmetric configuration on the NICs​(Link), so any discrepancy here should be fixed at the network or adapter level.

  • Test Connectivity and Failover: Although not a PowerShell one-liner, a practical verification is to ensure that VMs and the host (if applicable) have network connectivity through the new SET switch. You can create or attach a test VM to the “HyperV-TeamSwitch” and assign it an IP to ping out. Try disconnecting one of the physical NIC cables (or disabling one NIC) and ensure traffic continues on the remaining NIC (the ping should continue without dropping more than a packet or two during failover). This tests the failover aspect of the team. For load balancing verification, you might monitor the NIC traffic counters (using Performance Monitor or Get-NetAdapterStatistics) while generating load from multiple VMs to see that both NICs in the team are carrying traffic. This is more of a manual test, but it confirms that the SET team is functioning as expected in both load balancing and redundancy.

  • Review Event Logs (if needed): The Hyper-V Virtual Switch will log events if something is misconfigured. After setting up, check the System event log for any Hyper-V Networking or VMSMP warnings/errors. For example, an event about “an LBFO team may not be attached” would indicate an attempt to use LBFO where not supported (which our configuration avoids by using SET). No such errors should be present if the SET team is correctly configured for the vSwitch.
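
The checks above can be collected into one short verification pass; the switch and NIC names follow the examples used throughout this post:

```powershell
# Team composition, teaming mode, and load-balancing algorithm
Get-VMSwitchTeam -Name "HyperV-TeamSwitch" |
    Format-List Name, NetAdapterInterfaceDescription, TeamingMode, LoadBalancingAlgorithm

# Link state and speed of each member NIC (all should be Up at the same speed)
Get-NetAdapter -Name "NIC1","NIC2" | Select-Object Name, Status, LinkSpeed

# Per-NIC traffic counters, useful while generating load from several VMs
Get-NetAdapterStatistics -Name "NIC1","NIC2" | Select-Object Name, ReceivedBytes, SentBytes
```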

By performing the above verification steps, you can be confident that the transition to SET was successful. The Get-VMSwitchTeam output is the clearest confirmation – it shows that your Hyper-V switch is indeed using SET with the intended NICs​(Link). You should see SwitchTeam information indicating SwitchIndependent mode and either Dynamic or HyperVPort load balancing (with all members listed). This confirms you’re using the supported configuration on Windows Server 2022 (since an LBFO team would not appear here – instead, Get-NetLbfoTeam would list it if one existed, but in our case we bypass LBFO). Once verified, your Hyper-V host’s networking is now running on Switch Embedded Teaming, which is the Microsoft-endorsed solution moving forward. This ensures you can take advantage of the latest Hyper-V networking performance features and that you’re in line with the support policy for Windows Server 2022 Hyper-V​(Link).

 

Sources:

  • Microsoft Docs – Features removed or deprecated in Windows Server 2022: Hyper-V switch no longer supports LBFO teams​.
  • Microsoft Docs – Windows Server Supported Networking Scenarios: Introduction of Switch Embedded Teaming (SET) for Hyper-V and SDN​.
  • Microsoft Docs – Azure Stack HCI Networking (Host network requirements): SET overview and requirements (symmetric NICs, up to 8 adapters, supported algorithms)​.
  • Microsoft Docs – Hyper-V PowerShell Reference: Set-VMSwitchTeam parameters (SwitchIndependent only, LB algorithms HyperVPort/Dynamic).
  • Microsoft Docs – Hyper-V PowerShell Reference: Using Get-VMSwitchTeam to view SET team members.

Sunday, 12 January 2025

Integrating BBC RSS Feed into Home Assistant Dashboard Using Feedreader

To display the latest news item from the BBC RSS feed on your Home Assistant dashboard using the feedreader integration, follow these steps:

1. Add the Feedreader Integration via the Home Assistant UI:

  1. Navigate to Settings > Devices & Services.
  2. Click on Add Integration.
  3. Search for and select Feedreader.
  4. When prompted, enter the RSS feed URL:
    • https://feeds.bbci.co.uk/news/rss.xml?edition=uk
  5. Complete the setup by following the on-screen instructions.

2. Create Helper Entities:

You'll need to create helper entities to store the title, description, link, and publication date of the latest news item.

  1. Navigate to Settings > Devices & Services > Helpers.
  2. Click on Create Helper and select Text.
    • Name it Newsitem Title.
    • Repeat this process to create two more text helpers named Newsitem Description and Newsitem Link.
  3. Create a Datetime helper:
    • Name it Newsitem Date Time.

3. Set Up Automation to Process New Feed Entries:

Create an automation that updates the helper entities when a new feed entry is detected.

  1. Navigate to Settings > Automations & Scenes.
  2. Click on Create Automation and choose Start with an empty automation.
  3. Configure the automation as follows:

    Trigger:

    • Trigger Type: Event
    • Event Type: feedreader
    • Event Data:
      • feed_url: https://feeds.bbci.co.uk/news/rss.xml?edition=uk

    Actions:

    • Action Type: Call Service
      • Service: input_text.set_value
      • Target: input_text.newsitem_title
      • Value: {{ trigger.event.data.title }}
    • Add similar actions for input_text.newsitem_description, input_text.newsitem_link, and input_datetime.newsitem_date_time, setting their values to {{ trigger.event.data.description }}, {{ trigger.event.data.link }} and {{ trigger.event.data.published }}, respectively.
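
The automation built in the UI above corresponds to YAML along these lines (the entity IDs assume the helper names from step 2, and the exact event data fields come from the feedreader integration):

```yaml
alias: Update latest BBC news item
trigger:
  - platform: event
    event_type: feedreader
    event_data:
      feed_url: https://feeds.bbci.co.uk/news/rss.xml?edition=uk
action:
  - service: input_text.set_value
    target:
      entity_id: input_text.newsitem_title
    data:
      value: "{{ trigger.event.data.title }}"
  - service: input_text.set_value
    target:
      entity_id: input_text.newsitem_description
    data:
      value: "{{ trigger.event.data.description }}"
  - service: input_text.set_value
    target:
      entity_id: input_text.newsitem_link
    data:
      value: "{{ trigger.event.data.link }}"
  - service: input_datetime.set_datetime
    target:
      entity_id: input_datetime.newsitem_date_time
    data:
      datetime: "{{ trigger.event.data.published }}"
mode: single
```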

4. Add a Markdown Card to Your Dashboard:

To display the latest news item on your dashboard, add a Markdown card with the following content:

  1. Navigate to your dashboard and click on Edit Dashboard.
  2. Click on Add Card and select Markdown.
  3. In the Content field, enter:
    **[{{ states('input_text.newsitem_title') }}]({{ states('input_text.newsitem_link') }})**
    
    {{ states('input_text.newsitem_description') }}
    
    _Published on: {{ as_datetime(states('input_datetime.newsitem_date_time')).strftime('%B %d, %Y %H:%M') }}_
  4. Click Save to add the card to your dashboard.

5. Test the Setup:

To test without waiting for a new RSS feed entry, you can simulate a feedreader event using the Developer Tools:

  1. Navigate to Developer Tools > Events.
  2. In the Event field, enter feedreader.
  3. In the Event Data field, input a JSON object that mimics the data structure of a real feed entry, for example:
    {
      "feed_url": "https://feeds.bbci.co.uk/news/rss.xml?edition=uk",
      "title": "Sample News Title",
      "description": "Sample news description.",
      "link": "https://www.bbc.co.uk/news/sample-news",
      "published": "2025-01-12T13:32:46+00:00"
    }
  4. Click Fire Event to simulate the event.
  5. Check if the helpers are updated accordingly.

After setting up, check the Logs under Settings > System > Logs to ensure there are no errors related to the feedreader integration or the automation.

By following these steps, your Home Assistant dashboard will display the latest news item from the BBC RSS feed using the feedreader integration.

Friday, 20 December 2024

Search for an IP Address in the Last 7 Days of Windows Security Event Logs

This PowerShell script allows you to filter Windows Security event logs for a specific IP address, focusing on events from the past 7 days. The results are saved to a CSV file for further analysis.

The Script

# Define the IP address and output CSV file path
$ipaddress = "10.1.1.1"
$outputFile = "C:\SecurityEvents_Last7Days.csv"

# Define the start date (7 days ago)
$startDate = (Get-Date).AddDays(-7)

# Extract the events from the last 7 days and export to CSV
Get-WinEvent -LogName Security -FilterXPath "*[EventData[Data[@Name='IpAddress']='$ipaddress']]" |
    Where-Object { $_.TimeCreated -ge $startDate } |
    Select-Object TimeCreated, Id, Message |
    Export-Csv -Path $outputFile -NoTypeInformation -Encoding UTF8

# Notify user of completion
Write-Output "Events from the last 7 days successfully exported to $outputFile"

Key Features

  1. Filters by IP Address: Searches for events where the IP address matches the specified value.
  2. Time Range: Limits results to events that occurred in the last 7 days using the TimeCreated property.
  3. CSV Output: Saves event details (timestamp, ID, and message) to a specified CSV file.

How to Use It

  1. Replace 10.1.1.1 with the target IP address.
  2. Save the script to a .ps1 file or run it directly in PowerShell with administrator privileges.
  3. Locate the output file (C:\SecurityEvents_Last7Days.csv) for review.

Script Workflow

  1. Input Definition: The $ipaddress variable holds the IP address, and $outputFile specifies the CSV file location.
  2. Time Range Setup: $startDate is calculated as 7 days prior to the current date.
  3. Event Filtering: Get-WinEvent retrieves log entries matching the IP address. Where-Object ensures only events from the past 7 days are included.
  4. Data Export: Selected details are saved to the CSV file for analysis.
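
The XPath filter pushes the IP match into the event log service, which is efficient, but if you only care about specific event IDs a -FilterHashtable query can be faster still. A hedged variant, using event ID 4625 (failed logons) purely as an example:

```powershell
# Filter server-side by log, event ID, and start time,
# then match the IP address in the rendered message text
$ipaddress = "10.1.1.1"
$startDate = (Get-Date).AddDays(-7)

Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625; StartTime = $startDate } |
    Where-Object { $_.Message -match [regex]::Escape($ipaddress) } |
    Select-Object TimeCreated, Id, Message
```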

Practical Applications

  • Security Monitoring: Quickly identify events tied to suspicious IP activity.
  • Incident Investigation: Focus on recent logs for faster issue resolution.
  • Data Analysis: Exported CSV files can be reviewed in Excel or other tools.

Conclusion

This script is a concise, efficient way to analyze recent security events related to a specific IP address. Adjust the IP and time range as needed for your specific use case, and use the exported data to inform your network security actions.

Monday, 9 December 2024

Understanding Microsoft SQL Index Fragmentation and How to Manage It

Introduction

Indexes in SQL Server play a crucial role in improving query performance by allowing faster access to data. However, over time, these indexes can become fragmented, leading to slower queries and increased system resource usage. In this post, we’ll explore what index fragmentation is, its types, and how to address it effectively.

What Is Index Fragmentation?

Index fragmentation occurs when the logical order of data pages in an index no longer matches their physical order on disk. This misalignment can cause SQL Server to work harder to retrieve ordered data, negatively impacting performance.

Types of Fragmentation:

  1. Internal Fragmentation:

    • Happens when data pages contain excessive free space, often due to page splits during inserts or updates.
    • Leads to inefficient use of storage and additional I/O operations.
  2. External Fragmentation:

    • Occurs when the logical sequence of pages doesn’t align with their physical storage order.
    • Results in extra effort for SQL Server to return ordered results.

When to Reorganize or Rebuild Indexes

To manage fragmentation, SQL Server provides two options:

  • Reorganize:
    • A lightweight, online operation that defragments the index at the leaf level by reordering pages.
    • Minimal system resource usage and can be safely interrupted.
  • Rebuild:
    • A more intensive process that completely recreates the index, removing fragmentation.
    • Can be done online or offline, depending on your SQL Server edition.
    • Requires more resources but provides thorough optimization.

Key Considerations for Online Index Rebuilds:

  • Enterprise Edition: Supports online rebuilds, allowing uninterrupted access to data.
  • Standard and Other Editions: Requires offline rebuilds, during which data access is temporarily restricted.

Best Practices

  • Reorganize when fragmentation levels are between 5% and 30%.
  • Rebuild when fragmentation exceeds 30%.

These thresholds may vary depending on workload and system specifics. Regular monitoring of fragmentation levels helps maintain optimal performance.
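
Applying those thresholds to a single index looks roughly like this; the table and index names are placeholders, and ONLINE = ON requires an edition that supports online rebuilds:

```sql
-- Reorganize for moderate fragmentation (roughly 5–30%)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

-- Rebuild for heavy fragmentation (above ~30%);
-- use ONLINE = ON only where your edition allows it
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);
```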

The Code

SET NOCOUNT ON;

-- Create a temporary table for results
IF OBJECT_ID('tempdb..#Fragmentation') IS NOT NULL DROP TABLE #Fragmentation;

CREATE TABLE #Fragmentation (
    DatabaseName NVARCHAR(128),
    TableName NVARCHAR(128),
    IndexName NVARCHAR(128),
    IndexType NVARCHAR(60),
    AvgFragmentationPercent FLOAT,
    PageCount BIGINT -- page_count from sys.dm_db_index_physical_stats
);

-- Declare variables
DECLARE @DBName NVARCHAR(128);
DECLARE @SQL NVARCHAR(MAX);

-- Iterate through all databases
DECLARE dbCursor CURSOR FOR
SELECT name FROM sys.databases
WHERE state_desc = 'ONLINE' AND name NOT IN ('master', 'tempdb', 'model', 'msdb');

OPEN dbCursor;
FETCH NEXT FROM dbCursor INTO @DBName;


WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = N'
    USE ' + QUOTENAME(@DBName) + N'; -- QUOTENAME guards against unusual database names
    INSERT INTO #Fragmentation
    SELECT
        DB_NAME() AS DatabaseName,
        OBJECT_NAME(ips.object_id) AS TableName,
        i.name AS IndexName,
        CASE
            WHEN i.type = 1 THEN ''Clustered Index''
            WHEN i.type = 2 THEN ''Non-Clustered Index''
            WHEN i.type = 3 THEN ''XML Index''
            ELSE ''Unknown''
        END AS IndexType,
        ips.avg_fragmentation_in_percent AS AvgFragmentationPercent,
        ips.page_count AS PageCount
    FROM
        sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, ''LIMITED'') ips
    JOIN
        sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
    WHERE
        ips.avg_fragmentation_in_percent > 5
    AND
        ips.page_count > 1;';

    EXEC sp_executesql @SQL;
    FETCH NEXT FROM dbCursor INTO @DBName;
END;

CLOSE dbCursor;
DEALLOCATE dbCursor;

-- Display results
SELECT * FROM #Fragmentation
ORDER BY DatabaseName, AvgFragmentationPercent DESC;

-- Clean up
DROP TABLE #Fragmentation;


Sunday, 25 August 2024

Understanding and Implementing BIMI TXT Records

Brand Indicators for Message Identification (BIMI) is an innovative standard that empowers brands to showcase their logo in email clients that are compatible with BIMI. This feature not only bolsters brand recognition but also fosters trust among email recipients. Here's a concise guide on what BIMI TXT records are and how to utilize them.

A BIMI TXT record is a string of text added to your domain's DNS records at the hostname default._bimi.[yourdomain]. It contains the URL of your logo file, which must be a Scalable Vector Graphics (SVG) file served over HTTPS.

To establish a BIMI record, you first need an SVG logo file uploaded to a publicly accessible HTTPS location on your domain. Then create a TXT record named default._bimi with the following content:

v=BIMI1; l=[your SVG file URL]

This simple step allows your brand's logo to appear in supporting email clients, enhancing your brand's visibility and trustworthiness. Note that most mailbox providers will only display the logo if your domain also enforces DMARC with a policy of quarantine or reject.
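Before publishing the record, it is worth sanity-checking its syntax. The following is a minimal sketch (not part of any official BIMI tooling) that parses the tag=value pairs and verifies the two required tags:

```python
def parse_bimi_record(record: str) -> dict:
    """Split a BIMI TXT record like 'v=BIMI1; l=https://...' into its tags."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

def is_valid_bimi(record: str) -> bool:
    """Check the required tags: version v=BIMI1 and an HTTPS logo URL in l=."""
    tags = parse_bimi_record(record)
    return tags.get("v") == "BIMI1" and tags.get("l", "").startswith("https://")

print(is_valid_bimi("v=BIMI1; l=https://example.com/logo.svg"))  # True
print(is_valid_bimi("v=BIMI1; l=http://example.com/logo.svg"))   # False (not HTTPS)
```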

Saturday, 17 August 2024

Windows Memory Compression (More RAM at the Expense of CPU)

Windows 10 introduced a feature known as Memory Compression in build 10525, and the feature is also included in Windows 11. It aims to optimize the utilization of your system's physical memory and reduce the need for disk-based pagefile IO operations.

Memory Compression works by compressing infrequently accessed pages and retaining them in a new compression store within the physical RAM. This process allows your PC’s RAM to store more data than its original capacity, which can enhance your system's performance.

For instance, if your PC has 8 GB of RAM available, and there’s 9 GB of data to be stored on it, Memory Compression will attempt to compress the extra data so it fits within the 8 GB capacity of your RAM. Without Memory Compression, your PC would store the extra data in a file on your hard drive storage, which can slow down your PC as it takes more time to read data from a file on the hard drive than from RAM.

While Memory Compression can improve performance, it does use more CPU resources. If you notice a lot of compressed memory and think it’s slowing down your PC, there are a couple of solutions. One solution is to install more physical memory (RAM). This will allow your system to store more data in RAM without needing to compress it, reducing the CPU usage associated with Memory Compression.

If installing more RAM is not feasible, you can disable Memory Compression. Here’s how:

  1. Open Windows PowerShell as an administrator (Disable-MMAgent is a PowerShell cmdlet, so it will not work in the classic Command Prompt).
  2. Type the following command and press Enter: `Disable-MMAgent -mc`
  3. Restart your computer.
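You can confirm the current state, or re-enable the feature later, with the companion cmdlets from the same MMAgent module:

```powershell
# Check whether memory compression is currently enabled
# (look at the MemoryCompression property in the output).
Get-MMAgent

# Re-enable memory compression if you change your mind,
# then restart for the change to take effect.
Enable-MMAgent -mc
```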

In conclusion, Memory Compression is a feature designed to optimize your system's performance by making efficient use of your RAM. It's a tool that can be beneficial, but like all tools, it's important to understand how it works and when to use it.