The other day I needed to test whether a registry key was present on an end user’s computer and create it if it didn’t exist. I performed a registry key value test with PowerShell. Since I was doing more than one, I pulled an older tool from my toolbox for this one. It’s small and easy to use.
This script is only a try/catch with some added inputs. First, we grab what we want to test in our parameters: two mandatory strings and a switch. The first string is the path we are going to test. The second is the value we want to test. For example, if we want to see if Google Earth has a version number, we would give the path HKLM:\Software\Google\Google Earth Pro and the value Version. If we want to see that version, we flip the only switch, ShowValue. Instead of returning true or false, the script will return the value itself.
Inside our try/catch, we use the Get-ItemProperty command and select the $Value. We run the command with the error action flag set to Stop, which turns any failure into a catchable, terminating error. Then we store all that information in the $Values variable.
Afterward, we use a basic if statement. We show the value when the ShowValue flag is set. However, if it’s not, we just return true. Finally, the catch returns false if the Get-ItemProperty command failed for any reason.
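Here is a minimal sketch of the tool described above; the function name and exact parameter names are my own stand-ins, not necessarily the original’s:

function Test-RegistryValue {
    param (
        [Parameter(Mandatory = $true)][string]$Path,
        [Parameter(Mandatory = $true)][string]$Value,
        [switch]$ShowValue
    )
    try {
        # -ErrorAction Stop turns a missing key or value into a catchable error.
        $Values = Get-ItemProperty -Path $Path -Name $Value -ErrorAction Stop
        if ($ShowValue) { $Values.$Value } else { $true }
    } catch {
        $false
    }
}

For the Google Earth example: Test-RegistryValue -Path "HKLM:\Software\Google\Google Earth Pro" -Value "Version" -ShowValue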
Conclusion
There are other ways to do this, but this was the quickest I have found. I added ShowValue recently because I needed it while troubleshooting the code. Overall, this little guy is perfect to add to any script that deals with registry changes.
While reading on Reddit, I found a common thread: people need a quick way to do a SharePoint file audit. I have a PowerShell function for this in my toolbox. This tool heavily uses the Search-UnifiedAuditLog cmdlet. The most common items I tend to audit are file modifications and deletions. This function covers modified, moved, renamed, downloaded, uploaded, accessed, synced, malware detection, restored from trash, locked, and finally unlocked. Search-UnifiedAuditLog is an Exchange Online cmdlet at the time of this writing, so you need to connect to Exchange Online. In this function, I am using the switch command, and I will follow that structure for the breakdown. Let’s first jump in with the function.
I’m glad you came to the breakdown. It means you want to know how the code works, and that you truly care about learning. Thank you. This code repeats itself a few times in different ways, so I will call out the differences, but not the similarities, after the first time explaining something. The first section is our parameters.
Parameters
We have eight parameters, and only one of them is mandatory. Firstly, we have the Type parameter. This mandatory parameter uses a validate set, which lets you select from the list of audit types this function supports.
Deleted
Modified
Created
Moved
Renamed
Downloaded
Uploaded
Synced
Accessed
MalwareDetected
Restored
Locked
UnLocked
Afterward, we have KeepAlive. This allows us to run the command multiple times without signing back into the system. So, if you want to keep your session alive, flip that flag. Next, we have two more switches: the first pulls only items edited in SharePoint itself, and the second is for OneDrive. They are named accordingly. After that, we have a start date and an end date. These values are nullable; basically, you don’t need them. The OutFile parameter asks for just the name of the file, and we use “./” to save it wherever you run the command from. Finally, we have the result size. The maximum number of results is 5000, but you can make this number smaller.
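Here is a sketch of that parameter block; the switch and parameter names are inferred from the description above, so match them to your own version:

param (
    [Parameter(Mandatory = $true)]
    [ValidateSet("Deleted","Modified","Created","Moved","Renamed","Downloaded",
                 "Uploaded","Synced","Accessed","MalwareDetected","Restored","Locked","UnLocked")]
    [string]$Type,
    [switch]$KeepAlive,
    [switch]$SharePoint,
    [switch]$OneDrive,
    [Nullable[datetime]]$StartDate,
    [Nullable[datetime]]$EndDate,
    [string]$OutFile,
    [ValidateRange(1, 5000)][int]$ResultSize = 5000
)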
Begin
In our begin section, we want to test for the Exchange Online Management module. Secondly, we want to validate Exchange connectivity. After that, we want to gather the date information for the start and end dates. Let’s take a look at the Exchange part first.
The Get-Module command works with PowerShell 5.1. However, I have seen older PowerShell builds fail to pull the information with this command. I am going to assume your PowerShell is up to date.
Afterward, we want to install the Exchange Online Management module if we don’t detect it. We use the count to see how many objects are inside our module variable. If it’s 0, it’s time to install, and we install it from the PSGallery.
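A sketch of that check, assuming the module variable name:

# If Get-Module finds nothing, pull the module down from the PSGallery.
$Module = Get-Module -ListAvailable -Name ExchangeOnlineManagement
if ($Module.Count -eq 0) {
    Install-Module -Name ExchangeOnlineManagement -Repository PSGallery -Force
}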
Now, we test the Exchange connection. We use Get-PSSession to review the current connections. Next, we test whether the number of connections named “ExchangeOnlineInternalSession” is greater than zero. The $isconnected variable will hold a true or false value.
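A sketch of that test; the exact session-name match is an assumption on my part:

# True when at least one Exchange Online session already exists.
$isconnected = (Get-PSSession | Where-Object { $_.Name -like "ExchangeOnlineInternalSession*" }).Count -gt 0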
if (-not $isconnected) {
    try {
        Connect-ExchangeOnline
    } catch {
        throw "Exchange Online Failed. Ending"
    }
}
After which, we act on that test. If $isconnected is false, we try to connect. However, if there is an error, we end the script and let the user know. We are not using a credential object to authenticate because MFA should always be a thing.
#Auto Generates Start and Finish dates
if ($Null -eq $StartDate) { $StartDate = ((Get-Date).AddDays(-89)).Date }
if ($Null -eq $EndDate) { $EndDate = (Get-Date).Date }
#Tests if end date is before start date.
if ($EndDate -lt $StartDate) { $StartDate = ((Get-Date).AddDays(-89)).Date }
if ($EndDate -gt (Get-Date).Date) { $EndDate = (Get-Date).Date }
Afterward, we need to get the dates right. If the start date is null, we pull back 89 days, just inside the audit log’s 90-day window, using Get-Date and AddDays as shown above. We do the same with the end date: if it’s null, we grab today’s date. Now, to prevent errors, we sanity-check both dates. The end date can’t be before the start date, and the end date can’t be later than the current date. The two if statements resolve this.
Process
We begin the process block by looking directly at our Type variable using a switch command. The switch lets us run the right commands for each Type. Let’s look at one of the switch branches.
The data that Search-UnifiedAuditLog produces contains a section called “AuditData”. This section has almost every piece of information you will need. The difference between each Type is the operations and the session id. The operations target the required logs, and this creates the backbone of the SharePoint file audit. The list below shows which operations I am using, and a sketch of one switch branch follows it. Once you gather the operation information, we need to pull the AuditData, which comes back in JSON format. We start off by looping through the records with a foreach loop. Then we pull the AuditData and pipe it into ConvertFrom-Json. Next, we create our PSCustomObject. Other than Moved, the output of the logs contains almost the same information. See the script for the details.
Operation Filters
Deleted: FileDeleted, FileDeletedFirstStageRecycleBin, FileDeletedSecondStageRecycleBin, FileVersionsAllDeleted, FileRecycled
Modified: FileModified, FileModifiedExtended
Moved: FileMoved
Renamed: FileRenamed
Downloaded: FileDownloaded
Uploaded: FileUploaded
Synced: FileSyncDownloadedFull, FileSyncUploadedFull
Accessed: FileAccessed, FileAccessedExtended
MalwareDetected: FileMalwareDetected
Restored: FileRestored
Locked: LockRecord
UnLocked: UnlockRecord
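Here is a sketch of one switch branch, the Modified case, assuming the parameter names from earlier; the AuditData property names are the usual SharePoint file operation fields, so verify them against your own tenant’s output:

switch ($Type) {
    "Modified" {
        $Records = Search-UnifiedAuditLog -StartDate $StartDate -EndDate $EndDate `
            -RecordType SharePointFileOperation -Operations FileModified,FileModifiedExtended `
            -SessionId "SPAudit-Modified" -SessionCommand ReturnLargeSet -ResultSize $ResultSize
        $Report = foreach ($Record in $Records) {
            # AuditData arrives as a JSON string; convert it to an object first.
            $AuditData = $Record.AuditData | ConvertFrom-Json
            [PSCustomObject]@{
                TimeStamp = $AuditData.CreationTime
                User      = $AuditData.UserId
                Operation = $AuditData.Operation
                File      = $AuditData.SourceFileName
                Site      = $AuditData.SiteUrl
                Workload  = $AuditData.Workload
            }
        }
    }
    # The other types follow the same pattern with their own -Operations and -SessionId values.
}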
End
Finally, it’s time for the end block. This is where we present the data we have gathered. Firstly, we need to determine whether the SharePoint or OneDrive switches were flipped.
Here we are checking whether both flags are unset or both are set. Then we check if the user gave us a filename. If they did, we export our report to a CSV file wherever we are executing the function from. However, if the user didn’t give us a filename, we just dump all the results.
Now, if the user selected only one of the two, we present just that information by filtering with a Where-Object. Like before, we check whether the user provided an OutFile. Finally, we check whether KeepAlive was set; if it wasn’t, we disconnect from Exchange.
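A sketch of that end-block logic, reusing the assumed switch names from the parameter section:

# Both switches set, or neither, means return everything.
if (($SharePoint -and $OneDrive) -or (-not $SharePoint -and -not $OneDrive)) {
    $Output = $Report
} elseif ($SharePoint) {
    $Output = $Report | Where-Object { $_.Workload -eq "SharePoint" }
} else {
    $Output = $Report | Where-Object { $_.Workload -eq "OneDrive" }
}
if ($OutFile) {
    $Output | Export-Csv -Path "./$OutFile" -NoTypeInformation
} else {
    $Output
}
# Tear down the session unless the user asked to keep it.
if (-not $KeepAlive) { Disconnect-ExchangeOnline -Confirm:$false }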
Conclusion
In conclusion, auditing shouldn’t be difficult. We can quickly pull the info we need. I hope you enjoy this powerful little tool.
Today, in the field, we will discover a user’s monitors through PowerShell. This is very useful for any kind of RMM tool. I have used this script with PDQ and Continuum, and you can also run it with Backstage. Let’s take a look at the script for monitor discovery.
The first part of this script grabs information from the root/wmi namespace. We are looking at the WmiMonitorID information, and we grab it with the Get-CimInstance command. WmiMonitorID produces encoded data streams. Finally, I pipe the output into a ForEach-Object loop. This loop is extremely important: by holding everything inside it, products like the Azure PowerShell terminal or ConnectWise Continuum see everything after it as part of the initial line.
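A sketch of that opening query and loop:

Get-CimInstance -Namespace root/wmi -ClassName WmiMonitorID | ForEach-Object {
    # Everything that follows in this post lives inside this loop.
}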
Afterward, we build out our adapter list inside the loop. I was able to find this list thanks to MagnumDB. We set the interface number as the key and the connection type as that key’s value. This way we can use a simple period call for the type later.
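A sketch of that lookup table; the keys are D3DKMDT_VIDEO_OUTPUT_TECHNOLOGY values of the kind MagnumDB surfaces, and this is only a partial list:

# Partial map of VideoOutputTechnology codes to friendly names.
$AdapterTypes = @{
    "0"          = "VGA"
    "4"          = "DVI"
    "5"          = "HDMI"
    "6"          = "LVDS"
    "10"         = "DisplayPort"
    "11"         = "Embedded DisplayPort"
    "2147483648" = "Internal"
}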
Next, it’s time to grab additional information. Firstly, we are grabbing the instance name. Later, you will see why.
$Instance = $_.InstanceName
Notice we are using the dollar sign followed by an underscore. This grabs the current item coming down the pipeline. In our case, that item is the WmiMonitorID information.
Afterward, we grab the basic sizes of the screen using the CIM class called WmiMonitorBasicDisplayParams. Let me untie my tongue after that one. This CIM instance produces all of the monitors’ information at once, so we need to filter with a Where-Object. This class exposes the same instance name as the previous command, which is why we gathered that information beforehand.
Before we continue, I want you to look closely. Notice that inside the Where-Object we already have a $_. for the instance name. That $_ refers to the current object in the inner pipeline, which is a WmiMonitorBasicDisplayParams record, not the monitor from the outer loop. This is why we created the $Instance variable.
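A sketch of that filtered query:

# Pull the display parameters and match them to the current monitor.
$Sizes = Get-CimInstance -Namespace root/wmi -ClassName WmiMonitorBasicDisplayParams |
    Where-Object { $_.InstanceName -eq $Instance }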
We use this same trick with our next piece of information: we query WmiMonitorConnectionParams using the same method. However, we want only the VideoOutputTechnology value, which is why we wrap the command inside parentheses.
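A sketch of that query; the parentheses let us grab just the one property:

$Connections = (Get-CimInstance -Namespace root/wmi -ClassName WmiMonitorConnectionParams |
    Where-Object { $_.InstanceName -eq $Instance }).VideoOutputTechnology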
Finally, we need to build the output. This is what the end user will see and interact with. We accomplish this with a PSCustomObject. Let’s talk about each variable, as there are special conditions in the code here; a combined sketch follows the breakdown.
Manufacturer
We use System.Text.Encoding to decode the WmiMonitorID data, looking at the ManufacturerName property. After decoding, we trim the data.
Like the previous variable, we do the same thing with the name. Using System.Text.Encoding, we get the string and trim down the UserFriendlyName information.
Afterward, we work with the size. In this case we do some math. The size values are not encoded like the previous properties, so we can work with them directly. We add the max horizontal image size to the max vertical image size, divide the sum by 2.54, and round the result. This code is a little ugly.
Finally, we use the adapter types from above. It’s as simple as calling $AdapterTypes.”$Connections”. See, I told you that you would see this later.
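Putting it together, a combined sketch of the output object, assuming the variable names used above (the WMI name fields are UInt16 arrays that PowerShell coerces for GetString):

[PSCustomObject]@{
    # Decode the UInt16 arrays to text and trim the trailing nulls.
    Manufacturer = ([System.Text.Encoding]::ASCII.GetString($_.ManufacturerName)).Trim([char]0)
    Name         = ([System.Text.Encoding]::ASCII.GetString($_.UserFriendlyName)).Trim([char]0)
    # Horizontal plus vertical size (cm), divided by 2.54 and rounded.
    Size         = [math]::Round(($Sizes.MaxHorizontalImageSize + $Sizes.MaxVerticalImageSize) / 2.54)
    # Friendly connection name via the lookup table built earlier.
    Connection   = $AdapterTypes."$Connections"
}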
Monitor discovery is not out of reach. PowerShell is the tool that brings this to life without costing additional money. I have used this code to grab information on hundreds of computer monitors. Then I used a web scraper to grab the prices of the monitors. At the end of the day, I produced a list of computer monitors and their prices. This gave us a total number to hand to insurance.
At a previous company, we had to maintain Windows updates without WSUS, which caused some unique complexities. Back then, all machines in question were Microsoft Surface tablets, which meant driver updates were important. Thus, I created a one-liner to update Windows. In today’s post, we will go over Windows updates with PowerShell. Using PowerShell allows you to use tools like Backstage or scripts to install updates on remote machines quickly. The first part of this post covers how to do it manually, and the final part is the one-liners. PSWindowsUpdate is the module we will be using.
Warnings
Today’s code has the ability to install all Windows updates, including updates blocked by different software. Thus, reviewing the updates and being confident in what you are updating are essential to success.
The Manual Breakdown
Once you are connected to a machine on which you want to do Windows updates with PowerShell, start a PowerShell session. Each step from here on will help make the method clear and clean.
Execution Policy
Set-ExecutionPolicy -ExecutionPolicy Bypass
This command allows you to install modules and any other items in PowerShell. PSWindowsUpdate requires the execution policy to be at least Bypass. You can learn more about execution policies here. Note, you must be running PowerShell in an elevated prompt for this code to work.
NuGet
Install-PackageProvider Nuget -Force
After setting the execution policy, we might need to update the package provider. This step is what makes a single-line script a challenge. With this knowledge, we force an installation of the newest package provider.
The next piece is to install the PSWindowsUpdate module. This module does our heavy lifting. Here is where we will need the force and confirm flags.
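A sketch of that install step:

Install-Module PSWindowsUpdate -Force -Confirm:$false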
Import PSWindowsUpdate
Import-Module PSWindowsUpdate
Now that we have the module, it is time to import it. Importing a module does not need additional input.
Getting the Windows Update
Get-WindowsUpdate -MicrosoftUpdate
It’s time to get the updates. Here is where we grab the KB information, and where Windows updates with PowerShell happen. This is where you can find updates to research; it’s important to know what you are updating.
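The install command then looks something like this; the KB number below is a placeholder, so swap in the one you researched:

Install-WindowsUpdate -KBArticleID "KB1234567" -AcceptAll -IgnoreReboot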
This command will install the KB that you wish without asking any questions. You will see a fancy update progress bar during this time.
One-Liner Commands to Install Windows Updates With PowerShell
The following are single-line commands that install all the updates according to their purpose. These commands have the ability to break your system; one example is the BitLocker update that bricked machines recently. The first command installs all the KB updates.
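A sketch of that one-liner, chaining the manual steps above:

Set-ExecutionPolicy -ExecutionPolicy Bypass -Force; Install-PackageProvider NuGet -Force; Install-Module PSWindowsUpdate -Force -Confirm:$false; Import-Module PSWindowsUpdate; Install-WindowsUpdate -AcceptAll -IgnoreReboot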
This next command will install all updates on the machine, including the KB, Microsoft, and vendor updates. Please be aware of any dangerous updates that are in the wild; this command will install those as well.
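The same sketch with the -MicrosoftUpdate flag added, which pulls in the vendor and driver updates:

Set-ExecutionPolicy -ExecutionPolicy Bypass -Force; Install-PackageProvider NuGet -Force; Install-Module PSWindowsUpdate -Force -Confirm:$false; Import-Module PSWindowsUpdate; Install-WindowsUpdate -MicrosoftUpdate -AcceptAll -IgnoreReboot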
Custom compliance policy scripts will change how you build out compliance policies. In order to make a custom compliance policy script, you first must have Intune. You can review the licenses here. Once you have the proper licensing, you should be able to log into the Endpoint Manager. The first things we will need are a PowerShell script and a JSON file for this policy.
Custom Compliance Policy Scripts
The first thing we are going to do is build out the script. In this example, we are going to test for SentinelOne and Perch. There are two ways we can do this: we can test whether the services are installed, or we can test whether the product is installed. We will use the services because querying the product is a slower process. The output needs to be compressed JSON. Time to start building.
Firstly, we get the services with Get-Service.
#Grabs Services
$Services = Get-Service
Since we have the services, we can start testing against the collected info. We are looking for the Perch and SentinelOne services, and we will search for them using a Where-Object command. Below are the two services we are looking for.
Perch = Perch-Auditbeat
Sentinel One = SentinelAgent
We will wrap this Where-Object command inside an if statement. The output we are looking for is true or false.
Finally, we sort and send the results out as JSON using the Sort-Object and ConvertTo-Json commands.
#Returns the Service
$ReturnHash | Sort-Object -Property name | ConvertTo-Json -Compress
Make sure to save this script as a .ps1 file because we will have to upload it later.
Custom Compliance Policy JSON
The next step is to create a custom compliance JSON file for your PowerShell script. The JSON uses the supported operator IsEquals and the supported data type Boolean. You can learn more about how to build your JSON here. Below is an example JSON file.
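The original file isn’t reproduced here, so the following is a minimal sketch based on Microsoft’s documented custom compliance schema; the MoreInfoUrl and remediation strings are placeholders, and the SettingName values must match the keys the script returns:

{
    "Rules": [
        {
            "SettingName": "Perch",
            "Operator": "IsEquals",
            "DataType": "Boolean",
            "Operand": true,
            "MoreInfoUrl": "https://example.com",
            "RemediationStrings": [
                {
                    "Language": "en_US",
                    "Title": "Perch agent is missing.",
                    "Description": "Install the Perch-Auditbeat service."
                }
            ]
        },
        {
            "SettingName": "S1",
            "Operator": "IsEquals",
            "DataType": "Boolean",
            "Operand": true,
            "MoreInfoUrl": "https://example.com",
            "RemediationStrings": [
                {
                    "Language": "en_US",
                    "Title": "SentinelOne agent is missing.",
                    "Description": "Install the SentinelAgent service."
                }
            ]
        }
    ]
}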
Under the basics tab, we need to fill in some information.
Name it something that makes sense. For example, S1 and Perch Script
The description is where you need to explain what’s going on. If you have any links, this is where you should add them.
The publisher is going to be yourself unless you are pulling the script from someone else.
Afterward, click next to go to the Settings tab. Inside this tab, you will need to add the script that we wrote above. Just in case you forgot it, here it is again.
#Grabs Services
$Services = Get-Service
#Checks the services
$ReturnHash = @{
Perch = if ($Services | where-object { $_.name -like "perch-auditbeat" }) { $true } else { $false }
S1 = if ($Services | where-object { $_.name -like "SentinelAgent*" }) { $true } else { $false }
}
#Returns the Service
$ReturnHash | Sort-Object -Property name | ConvertTo-Json -Compress
The script needs to run as the computer since we are pulling from the services. We don’t need to enforce the script signature check, and we are running 64-bit PowerShell because it’s 2022. Once you have these set, click next to go to the review page. Review the settings and click create. The script takes about two minutes to show up in the compliance policy scripts list.
Building the Custom Compliance Policy
Navigate back to Endpoint Manager > Devices > Compliance Policies. Click the Create Policy button at the top of the screen. A “Create a policy” sidebar will appear. Set the platform to “Windows 10 and later” and click create.
Basic Tab
This is where we name the policy and give it a good description. The name needs to be unique and help the end user understand what’s going on. I have named this one “S1 and Perch Script Policy” and described what it is doing in the description.
Compliance Settings Tab
Here is where we add the script that we created in the previous step. Firstly, click the required toggle. Next, click “Click to Select”. A “select a discovery script” sidebar will appear; find your script and select it there. After that, we need to upload the JSON file. Click the blue icon and select your file. Additional information will appear.
Action for NonCompliance
Afterward, click the next button to bring yourself to the Actions for noncompliance tab. There is one default item in this list. Here you can do things like emailing the user or marking them as noncompliant. You can even retire the machine after so many days, add message templates, and more. For us, we are using the default setup at the 7-day mark.
Assignments
From there, click next. The following tab is the assignments tab, where you can select your groups. In this example, I selected the Windows 10 groups. To learn how to set up a dynamic Windows 10 group, you can go here. To add a group, all you have to do is click Add Groups under “included”. If you want to exclude a group, add it under excluded groups.
Review and Create
Finally, once you have the pieces put together, we can review them. If you see any errors, go back and fix them accordingly. If not, click create. It can take a few minutes for the custom compliance policy scripts to show up in the main menu. Give it time.
Conclusion
Creating custom compliance policy scripts will change how you Intune. It has changed how I Intune. The more you dig, the deeper this rabbit hole will go. Take time and enjoy reading about each thing you can do. It makes the world of custom compliance policies with PowerShell scripts different.