Scheduled Tasks with PowerShell

Last week we went over how to do audits using PowerShell (Link). Today we will use scheduled tasks with PowerShell to run that audit script every hour. We do this because we don’t want to run the script by hand every hour; let the computer handle that for us. We will go over how to build the scheduled task manually and then the PowerShell way.

Manual Process – Scheduled Tasks

Let’s take a look at the manual process. We are placing our AuditDisabledAccounts.ps1 script on the computer. I like placing things in the C:\scripts or C:\temp folder. Sometimes this is good, sometimes this is bad; it depends on the environment you are working in.

  1. Start Task Scheduler
  2. Click Task Scheduler Library.
  3. Right-click and select Create Basic Task.
  4. Name it accordingly. I am naming mine “Hourly Disabled AD Audit.”
  5. Under Triggers, I selected When the computer starts.
    • This scheduled task will repeat itself with another setting we add afterwards. It’s best to have it start when the computer starts; that way, if the system restarts, the task starts again. Otherwise it can become confusing over time.
  6. The action will be Start a program.
    • Program: PowerShell
    • Arguments: -NoProfile -ExecutionPolicy Bypass -File "C:\temp\AuditDisabledAccounts.ps1" -HoursBack 1 -Servers AD1,AD2,AD3 -OutCSVfile "C:\Reports\DisabledAccountsAudit.csv"
    • Start In: C:\temp
  7. To finish, check the box to open the Properties dialog when you click Finish.

Now we have a basic scheduled task set up. Next we want to have it trigger every hour. Since we opened the Properties dialog, you can do just that.

  1. On the General tab:
  2. Select the radio button “Run whether the user is logged on or not.”
    • If you need to change the user, this is where you do that.
  3. Click the Triggers tab.
  4. You will see the At startup trigger; click Edit.
  5. Under Advanced settings:
    • Check “Repeat task every”
    • Select 1 hour
    • For a duration of: Indefinitely
  6. Click OK.

That’s how you manually set up a scheduled task for PowerShell.

PowerShell Method

Now let’s create the scheduled task with PowerShell. We will be using the *-ScheduledTask cmdlets to create the task accordingly. Let’s take a look at the script itself.

The script – Scheduled Tasks with PowerShell

# Variables
$ScriptPath = "C:\temp\AuditDisabledAccounts.ps1"
$TaskName = "Audit Disabled Accounts"
$OutCSVfile = "C:\Reports\DisabledAccountsAudit.csv"
$Servers = "AD1,AD2,AD3"
$HoursBack = 1
$User = Read-Host -Prompt "Domain\Username"
$Creds = Read-Host -AsSecureString -Prompt "Enter Password" 

# Decode the secure string into plain text; Register-ScheduledTask needs it later
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($Creds)
$UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
[Runtime.InteropServices.Marshal]::ZeroFreeBSTR($BSTR)


# Build 24 daily triggers, one for each hour of the day
$triggers = 0..23 | ForEach-Object {
    New-ScheduledTaskTrigger -At "$($_):00" -Daily
}


# Principal: run as the supplied account with a stored password, without elevation
$principal = New-ScheduledTaskPrincipal `
    -Id 'Author' `
    -UserId "$User" `
    -LogonType Password `
    -RunLevel Limited
    

# Action: launch PowerShell with the audit script and its parameters
$Action = New-ScheduledTaskAction `
    -Execute "PowerShell" `
    -Argument "-NoProfile -ExecutionPolicy Bypass -File `"$ScriptPath`" -HoursBack $HoursBack -Servers $Servers -OutCSVfile `"$OutCSVfile`"" `
    -WorkingDirectory 'C:\temp\'

# Optional task object; Register-ScheduledTask below also accepts the pieces directly
$Task = New-ScheduledTask `
    -Description 'Used To Audit Disabled Accounts' `
    -Action $Action `
    -Principal $principal `
    -Trigger $triggers

# Register the task; this is the step that needs the plain-text password
Register-ScheduledTask `
    -TaskName "$TaskName" `
    -TaskPath '\' `
    -Action $Action `
    -Trigger $triggers `
    -User $User `
    -Password "$UnsecurePassword"

The breakdown

The first thing we do is set up our variables. We want the script path, the task name, the output file for our audit report, our servers, and how many hours back we want to go.

Variables

# Variables
$ScriptPath = "C:\temp\AuditDisabledAccounts.ps1"
$TaskName = "Audit Disabled Accounts"
$OutCSVfile = "C:\Reports\DisabledAccountsAudit.csv"
$Servers = "AD1,AD2,AD3"
$HoursBack = 1
$User = Read-Host -Prompt "Domain\Username"
$Creds = Read-Host -AsSecureString -Prompt "Enter Password" 

The first thing we want is the variables. We want the path of the script and its name, where our CSV file will be dropped, the servers, how many hours back to search, and the username and password. Notice that the user is captured with Read-Host and the password with a secure string. This is to help stop shoulder surfers and to keep the password out of PowerShell’s plain-text memory. Basically, we input the password as a secure string and it becomes a variable. Thus, if someone is looking through the PowerShell history, or monitoring it with something like Defender, they will not see the password, only the variable, from this point on.

Decoding Passwords as Variables

Part of Scheduled Tasks with PowerShell is that we need to register the task later. This means the password needs to be plain text. However, we never want the password visible in the shell. So we decode it directly into a variable.

$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($Creds)
$UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
[Runtime.InteropServices.Marshal]::ZeroFreeBSTR($BSTR)

The code above converts the secure string to plain text in PowerShell 5. PowerShell 7 can do this natively, but most servers still default to 5. The new variable, UnsecurePassword, holds the password as plain text for the register command.
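
If you are running PowerShell 7, the same decode is a one-liner; a minimal sketch (the -AsPlainText switch does not exist in Windows PowerShell 5.1):

# PowerShell 7+: decode the secure string without the BSTR dance
$UnsecurePassword = ConvertFrom-SecureString -SecureString $Creds -AsPlainText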

Triggers – Scheduled Tasks with PowerShell

We need to start making the triggers. Unlike the GUI, we can’t set up a startup trigger with an hourly repeat here; New-ScheduledTaskTrigger only exposes repetition on its -Once parameter set. Instead, the safest way is to create one trigger for each hour of the day. We do this using the New-ScheduledTaskTrigger command.

$triggers = 0..23 | ForEach-Object {
    New-ScheduledTaskTrigger -At "$($_):00" -Daily
}

Since we have 24 hours in a day, we repeat the ForEach-Object loop 24 times; we start at 0 and go through 23, which makes 24. As we loop, $_ holds the current hour, so we create a new trigger at that time and set it to daily. All of the triggers are collected into the $triggers array.
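
If you want to double-check the result, count the triggers and peek at their start times; a quick sanity-check sketch:

# Should print 24, then list each hourly start boundary
$triggers.Count
$triggers | Select-Object -ExpandProperty StartBoundary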

Principal

Next we want to set up the user account. The command for this is… yep, you guessed it, New-ScheduledTaskPrincipal. Here we are setting the Id to Author, using our user variable, setting the LogonType to Password, and setting the RunLevel to Limited. We don’t want it to have full access to anything, since it isn’t doing anything on the local PC. Notice the ` symbol. It lets you split one command across multiple lines: break here and continue on the next line. It makes reading the code so much easier.

$principal = New-ScheduledTaskPrincipal `
    -Id 'Author' `
    -UserId "$User" `
    -LogonType Password `
    -RunLevel Limited

Actions

Next we need to define our action, AKA what the task is going to do. Using New-ScheduledTaskAction, we execute PowerShell and push our arguments in. Using our variables, we fill in the blanks. It’s very straightforward. The secret sauce here is that the arguments mirror what you did in the GUI approach.

$Action = New-ScheduledTaskAction `
    -Execute "PowerShell" `
    -Argument "-NoProfile -ExecutionPolicy Bypass -File `"$ScriptPath`" -HoursBack $HoursBack -Servers $Servers -OutCSVfile `"$OutCSVfile`"" `
    -WorkingDirectory 'C:\temp\'

Tasks

Next we need to make the task itself. We are going to use the New-ScheduledTask command. This creates a task object that still needs to be registered. We give it the description we want, the action from above, the principal holding our user, and the triggers we built out.

$Task = New-ScheduledTask `
    -Description 'Used To Audit Disabled Accounts' `
    -Action $Action `
    -Principal $principal `
    -Trigger $triggers

Register The Task

Finally, we register the task in question with “Register-ScheduledTask”. Notice that this is where we use the password we captured at the start. It’s passed as a variable, so it is never shown in the PowerShell history. We hand the action and triggers in directly here; the $Task object from New-ScheduledTask could be registered instead via the -InputObject parameter.

Register-ScheduledTask `
    -TaskName "$TaskName" `
    -TaskPath '\' `
    -Action $Action `
    -Trigger $triggers `
    -User $User `
    -Password "$UnsecurePassword"
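
Once it registers, a quick sanity check is worth the extra minute; a minimal sketch using the same variables:

# Confirm the task exists, kick off a manual run, then check the last result (0 = success)
Get-ScheduledTask -TaskName $TaskName | Format-List TaskName, State
Start-ScheduledTask -TaskName $TaskName
Get-ScheduledTaskInfo -TaskName $TaskName | Select-Object LastRunTime, LastTaskResult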

Additional Thoughts on Scheduled Tasks with PowerShell

This technique is very powerful. I built a script that scanned the local network via Get-NetNeighbor. The script ran as a scheduled task and grabbed all the devices it could see. Imagine having admin rights, pushing out a script that scans the local network and drops a scheduled task on another computer, which then scans its own network. You could map out a whole network within a few minutes. This could be used as a worm, and it’s a good reason to block WMI on the network except from the machines that do the administration.
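
For context, Get-NetNeighbor just reads the local ARP/neighbor cache, so a scan like the one I described starts from something as small as this sketch:

# List IPv4 neighbors this machine has recently seen on the local network
Get-NetNeighbor -AddressFamily IPv4 |
    Where-Object { $_.State -in 'Reachable','Stale' } |
    Select-Object IPAddress, LinkLayerAddress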

What can we learn as a person?

It’s always a good idea to have routine. Having a scheduled task in your life that you enjoy tends to improve it. For example, I like going to a monthly meetup with my friends. It’s something I look forward to, and having it on my calendar helps. This is why vacations are important. We need to have those things on our calendar, and it’s OK to have them there. So, find something you can look forward to, and put it on the calendar.

Additional Resources

Intune Detection Script

Hi there! Have you ever scratched your head and wondered if you installed software the right way? You’re not alone. This gives a lot of system administrators a headache, especially when handling programs like AutoCAD 2022 across a variety of environments. That is where Microsoft Intune really shines, and the fact that you can use your own detection scripts makes it very useful. A custom Intune detection script is key.

These scripts are lifesavers. They help you check every device on the network, making sure that an app is not only present but also the right version. Today, we’re going to look in detail at a PowerShell script that can find AutoCAD 2022. This guide will help make your business life a little easier, whether you are new to Intune or know it well. Let us begin on our Intune detection script!

How do I make an Intune Detection Script?

First, what is a custom Intune detection script, really? It’s just a script for your Microsoft Intune management tool. It checks automatically to make sure that all of your devices have the right version of a piece of software installed. What makes this cool? It takes care of one of the most boring jobs in IT management automatically. Imagine making sure that software is compliant and installations are correct without having to check each machine by hand.

Custom scripts like the one we’re covering today are written in PowerShell, a strong scripting language that can do a lot with just a few lines of code. These scripts can get into the Windows Registry, find installed programs, and check their versions. It’s not just about saving time; it’s also about making sure that your software deployments work well and stay stable. It also cuts down on those crazy support calls we all hate.

The Breakdown

Getting into the nitty-gritty of our PowerShell script, let’s break it down line by line. This will help you understand exactly what each part does. Let’s get our geek on!

The Script

# Product to detect and the minimum version expected
$ProductName = "AutoCAD 2022"
$ProductVersion = "24.1.173.0"
# Uninstall keys for both 64-bit and 32-bit applications
$RegPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall", "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall"
$apps = Get-ChildItem -Path $RegPath
$Test = foreach ($app in $apps) {
    $app | Get-ItemProperty | Where-Object {($_.DisplayName -like "$ProductName")} | Select-Object *
}
# Guard against a missing install, and compare as [version] so 9.x never beats 24.x
if ($Test -and [version]$Test.DisplayVersion -ge [version]$ProductVersion) {
    write-host "Installed - $($Test.DisplayVersion)"
    exit 0
} else {
    exit 1
}

Let’s go line by line through our Intune detection script and break it down.

Lines 1-2: Define the Product

These two lines define the product you want to search for and the version you wish to check against. The product name can take wildcards, but I don’t suggest them, as they can cause more conflicts than they’re worth.

$ProductName = "AutoCAD 2022"
$ProductVersion = "24.1.173.0"

Line 3: Setting the Registry Path

The next line is where we look in the registry for the uninstall strings and product information. These registry keys are where Win32_Product gets its information, so reading them directly is much faster than using Win32_Product.

$RegPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall", "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall"
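
If you are curious just how much faster the registry read is, you can time both approaches yourself; a rough sketch (Win32_Product is slow partly because querying it triggers MSI consistency checks):

# Compare the registry read against a Win32_Product query
Measure-Command { Get-ChildItem -Path $RegPath | Get-ItemProperty } | Select-Object TotalSeconds
Measure-Command { Get-CimInstance -ClassName Win32_Product } | Select-Object TotalSeconds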

Line 4: Gather Installed Applications

Here, we’re grabbing a list of all items under the paths defined earlier. It’s akin to gathering all the potential treasure chests; we will open them later to get what we need.

$apps = Get-ChildItem -Path $RegPath

Lines 5-7: Filter and Test for the Product

In these lines, we loop through each app and check if it matches our product name. If it does, we take a closer look at its properties. Here we are sorting our gold coins from the silver ones: we take each of the products we want and put them into our $Test variable, our chest.

$Test = foreach ($app in $apps) {
    $app | Get-ItemProperty | Where-Object {($_.DisplayName -like "$ProductName")} | Select-Object *
}

Lines 8-12: Check Version and Provide Output

Assuming you have chosen a name that will only show up once, we now check whether the version is at least what we expect. We guard against the product being missing entirely, and we cast both sides to [version] so the comparison is numeric instead of alphabetical. If it passes, we say yep, it’s installed, and exit with a code of ZERO, the big 0. If it doesn’t, we exit with the error code of 1. This is important, as Intune looks for a string plus an exit code of 0 for success.

if ($Test -and [version]$Test.DisplayVersion -ge [version]$ProductVersion) {
    write-host "Installed - $($Test.DisplayVersion)"
    exit 0
} else {
    exit 1
}

To Deploy Your Script with Intune

Deploying a custom detection script with Intune requires more than copying and pasting code. You want to be sure the script operates smoothly on all targeted devices. Step-by-step instructions:

  1. Prepare the script: test it locally first. You shouldn’t distribute something without testing it on your own machines.
  2. Put the script in Intune:
    • Enter the Microsoft Endpoint Manager admin center.
    • Select Windows 10 under Devices > Scripts > Add.
    • Upload the PowerShell script and configure its settings, including whether it runs in the system or user context, based on the access it needs.
  3. Assign the script:
    • After uploading your script, assign it to device groups. You can choose groups by organizational units or other deployment parameters.
  4. Monitor the deployment:
    • Monitor script execution on the script profile’s Device Status and User Status tabs after deployment. This shows whether the script is executing properly or if any devices are failing.
  5. Update as needed:
    • Feedback from monitoring may call for changes to the script or the deployment parameters. Staying compatible with new system updates or IT environment changes may require regular updates.

Effective script deployment guarantees that all network devices meet software standards, like assuring all machine parts are well-oiled and working together.

    Common Issues and Troubleshooting Tips for an Intune Detection Script

    Even with the best preparation, things might not always go as planned. Here are some common issues you might face with custom Intune scripts and how to troubleshoot them:

    1. Script Fails to Execute:
      • Check Execution Policy: Ensure that the script’s execution policy allows it to run; this can block scripts if not set to an appropriate level (see the snippet after this list).
      • Review Script Permissions: Make sure the script has the necessary permissions to access the registry paths or any other resources it uses.
    2. Incorrect Script Output:
      • Verify Script Logic: Double-check your script’s logic. Look for typos in variable names or incorrect operators in conditions.
      • Test Locally: Always run the script locally on a test machine before deploying it to avoid simple errors.
    3. Issues with Script Deployment:
      • Assignment Errors: Make sure the script is assigned to the correct device groups. Incorrect assignments can lead to the script not being run where it’s needed.
      • Check Intune Logs: Use the logs provided by Intune to identify what’s going wrong when the script runs.
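
    For the execution policy check in item 1, a quick local test might look like this; a minimal sketch (Intune itself typically launches scripts with -ExecutionPolicy Bypass, so this mostly matters on your test machine):

        # Show the effective execution policy at every scope
        Get-ExecutionPolicy -List
        # Loosen only the current user's scope for local testing, if needed
        Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned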

    Troubleshooting is an integral part of managing scripts in a large environment. It’s a little like detective work, where you need to keep a keen eye on clues and sometimes think outside the box.

    What can we learn as a person today?

    Even though we don’t always mean it that way, we frequently execute “scripts” in our day-to-day lives, much like a PowerShell script checks for certain conditions before proclaiming success or failure. These are the things we do on a regular basis without thinking, like automated checks on a computer system; they help us evaluate and respond to the many opportunities and threats that life presents.

    When we look for patterns in our own lives, we can see what’s working and what isn’t. By exercising first thing in the morning, for instance, you may find that you get more done that day. This would be an example of a positive pattern, like a script that verifies everything is going according to plan. In contrast, if you find yourself feeling low after a session of social networking, it’s a sign that something needs to be changed, similar to a script fault.

    It is essential to listen to environmental feedback in order to make modifications. Our emotional and physiological responses, the opinions of others around us, and the outcomes we attain can all serve as sources of this type of feedback. Like adjusting a screenplay that isn’t working as planned, when our life’s routines bring about less ideal consequences, it’s a warning to halt and re-calibrate. Perhaps it necessitates reevaluating our current habits and deciding how much time is best spent on specific pursuits.

    The idea is to embrace learning and refining as a process, just like scripts that are updated over time. There is no instruction manual for life, and sometimes the only way to learn is by making mistakes. Being self-aware and willing to make adjustments for the better is more important than striving for perfection.

    Additional Resources

    PowerShell App Deployment Toolkit

    Over the years of Intune deployments, I have searched for a way to let my end users know that an application is being installed or uninstalled from their computer. I have used things ranging from notification bubbles to blanking a screen. All of these methodologies are poor at best. I found a few paid products that companies just didn’t want to pay for due to the insanity of the pricing. For example, one company wanted us to pay 150 USD per deployment. Multiply that by 1,500 devices and it adds up quickly. It wasn’t until I found the PowerShell App Deployment Toolkit that I found what I was looking for.

    What is the PowerShell App Deployment Toolkit?

    This toolkit is immensely powerful and amazingly simple to set up. You can download the toolkit here. It provides a framework to install and uninstall applications using PowerShell through a signed application. This allows us to deploy complex and confusing installs as a single package. A good example would be AutoCAD. Recently, I was tasked with standardizing AutoCAD in a single department. Some members used AutoCAD 2016, some used 2024. This was a problem, as the 2024 files did not work with AutoCAD 2016. Thus, I needed to uninstall the previous versions of AutoCAD before I installed the current version. As all files are backed up, I didn’t have to worry about anyone losing files. The toolkit was perfect for this.

    Key items I like about the toolkit

    Simple packaging

    Many application toolkits come with complex packaging. It’s normally an application that wraps itself around another application, which keeps wrapping until none of it is transparent. With the PowerShell App Deployment Toolkit, all you need to interact with is the Deploy-Application.ps1 file. That’s assuming you are doing more than an MSI file; if you are only using an MSI, all you need to do is drop the file in.

    As you can see in the screenshot, this is the package. When you download the zip file, you will be greeted with this structure. Deploy-Application.ps1 is where our code will go. The Files folder is where the installer files go. Following our AutoCAD example, the installer and updates would all be placed inside the Files folder.

    Deploy-Application.ps1

    This file has an amazing setup. It opens with a wall of documentation inside the file itself, explaining each step along the way. It is broken up into installation, uninstallation, and repair, and each section has pre, during, and post phases. This is great if you need to kill some services, send a message, or more. It’s also helpful because it gives you a structure to work within.

    The Commands

    Inside this package there are many useful commands. As I stated in the intro, it’s full of ways to communicate what you are doing with the end user. During an application install, you can show which applications need to be closed for the install to work by using the Show-InstallationWelcome command.

    Show-InstallationWelcome -CloseApps 'acad,adSSO,AutodeskDesktopApp,AdAppMgrSvc,AdskLicensingService,AdskLicensingAgent,FNPLicensingService' -CloseAppsCountdown 60
    

    This example closes the listed applications and gives the user a 60-second window to do so. This isn’t the only thing the command can do.

    SYNTAX
        Show-InstallationWelcome [-CloseApps <String>] [-Silent] [-CloseAppsCountdown <Int32>] [-ForceCloseAppsCountdown 
        <Int32>] [-PromptToSave] [-PersistPrompt] [-BlockExecution] [-AllowDefer] [-AllowDeferCloseApps] [-DeferTimes 
        <Int32>] [-DeferDays <Int32>] [-DeferDeadline <String>] [-MinimizeWindows <Boolean>] [-TopMost <Boolean>] 
        [-ForceCountdown <Int32>] [-CustomText] [<CommonParameters>]
        
        Show-InstallationWelcome [-CloseApps <String>] [-Silent] [-CloseAppsCountdown <Int32>] [-ForceCloseAppsCountdown 
        <Int32>] [-PromptToSave] [-PersistPrompt] [-BlockExecution] [-AllowDefer] [-AllowDeferCloseApps] [-DeferTimes 
        <Int32>] [-DeferDays <Int32>] [-DeferDeadline <String>] -CheckDiskSpace [-RequiredDiskSpace <Int32>] 
        [-MinimizeWindows <Boolean>] [-TopMost <Boolean>] [-ForceCountdown <Int32>] [-CustomText] [<CommonParameters>]
    

    Other commands, like Execute-Process, will launch processes that you need from the Files directory and more, all while logging what’s going on. You can find a full help system for all of the toolkit’s unique commands: navigating to the toolkit > AppDeployToolkit > AppDeployToolkitHelp.ps1 brings up a GUI that lets you read all about them.
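
    For instance, here is a minimal sketch of an Execute-Process call from the toolkit's Installation section; the installer name and silent switches are assumptions for illustration:

        # Hypothetical example: run an installer shipped in the toolkit's Files folder, with PSADT logging
        Execute-Process -Path 'Setup.exe' -Parameters '/quiet /norestart' -WindowStyle 'Hidden'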

    Using the Toolkit with Intune

    If you want the toolkit to interact with the end user’s session, you will need to grab a unique little tool from MDT: ServiceUI.exe. You can download MDT here. Once you have MDT installed, we need to pull ServiceUI.exe out of the install. Navigate to C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x64 and copy the ServiceUI.exe file. Place this file in the root of your PowerShell App Deployment Toolkit folder structure.
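
    In PowerShell, that copy might look like this; the destination path is an assumption, so point it at your own toolkit folder:

        # Copy ServiceUI.exe from the MDT install into the toolkit's root folder
        Copy-Item -Path 'C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x64\ServiceUI.exe' -Destination 'C:\Packages\AutoCAD-Toolkit\'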

    As you can see, ServiceUI.exe is in the root folder. Now we need to create the package. We can create a Win32 app package; I covered this here, and it is the same concept.

    • The folder would be the folder with your toolkit
    • The setup file would be the Deploy-Application.exe
    • The output file would be wherever you want the Intune app to be dumped.
    • and we don’t need to catalog the folder.

    Once you have your application built, it’s time to see how it works inside Intune. We start by uploading the application package, as described in the previous blog. The big difference here is our install and uninstall commands.

    Understanding the commands

    Our install command will use ServiceUI.exe together with Deploy-Application.exe.

    • Install: ServiceUI.exe -process:explorer.exe Deploy-Application.exe
    • Uninstall: ServiceUI.exe -process:explorer.exe Deploy-Application.exe -DeploymentType “Uninstall” -DeployMode “Interactive”

    By default, Deploy-Application.exe is interactive. There are two flags for Deploy-Application, and here is what they do.

    • DeploymentType: (Super Straight forward)
      • Install: Installs the application
      • Uninstall: Uninstalls the application
      • Repair: repairs the application.
    • DeployMode:
      • Interactive: Shows all of the prompts needed.
      • NonInteractive: Only shows the required prompts.
      • Silent: Shows no prompts.

    We can translate the commands above using these flags. By default, Deploy-Application.exe runs an interactive install, so we know the application will prompt and the end user will see it. The uninstall command will uninstall, also interactively. ServiceUI.exe allows you to run applications as the logged-in user and as the system at the same time. The biggest issue with ServiceUI.exe is that the application will not install until someone logs in. No flags are needed here.

    Overall, the PowerShell App Deployment Toolkit changes the ball game with deployments. I encourage anyone and everyone to dig deeper into it.

    What can we learn as a person today?

    I live in the southern United States. From time to time I will hear people battling over belief systems. In my lifetime I have come to an understanding of how these systems work. I liken “objective truth” to fish in a sea. Our belief systems are the nets we use to capture those fish. Some nets are better than others. The water of the sea is the useless, the distracting, or the misinformation; it only makes it harder to bring those pieces of objective truth into ourselves. A good net can capture a lot of fish and let the water out at the same time. A bad net, like a tarp, captures some but becomes unmanageable due to the water. It is the same with our beliefs. We are only strong enough to lift so much at different points in our lives.

    Premade Nets

    I see organized religions as premade nets. Think of it like a toolkit: a format that is easy to use and lets you get things done. Does the toolkit work for everyone? No. Just like this PowerShell toolkit would be useless in a world without PowerShell (on ChromeOS, say, this toolkit isn’t useful), some beliefs are useful where they are but not useful in other places. Sometimes these toolkits, these nets, are useful for some but not for others. If you don’t know PowerShell, this toolkit wouldn’t be useful to you. If you are shame sensitive, some religions are not for you.

    Everyone has their own toolset or net. No single toolset is inherently bad; what matters is how we use it and where. If you take a net to a small pond, get ready to waste your time and damage your net. If you throw your net aggressively into an aggressive sea, get ready to lose that net.

    Homemade Nets

    Once someone understands how the nets are made and how to repair them, it’s best for them to start building their own nets using the techniques they learned on their previous ones. By having a net or toolset of your own, you have full knowledge of it and can repair it quickly. This belief system would be uniquely yours and different from others’. So, when it breaks, you can grow it, replace parts, and more, without any problems. It’s yours and no one else’s.

    Let’s build our own beliefs.

    General Uninstaller for Intune

    This past month I was given a task to uninstall a few applications through Intune. The apps’ own uninstall feature did not work according to plan; however, a bunch of them worked with the CIM uninstall method, which I thought was funny. After writing the same code over and over again, I decided to write a general uninstaller for Intune. It also requires a custom detection script.

    The General Uninstaller Script

    param (
        [string[]]$ProductNames
    )
    # Win32_Product only lists MSI-installed applications
    $Products = Get-CimInstance -ClassName win32_Product
    foreach ($Product in $ProductNames) {
        if ($null -eq ($Products | where-object {$_.name -like "$Product"})) {
            write-host "Success"
            exit 1212
        } else {
            #Grabs the install Location
            $InstallLocation = ($Products | where-object {$_.name -like "$Product"}).InstallLocation

            #Uninstalls the product in question
            $Products | where-object {$_.name -like "$Product"} | Invoke-CimMethod -MethodName uninstall

            #Removes any leftover install folders the uninstaller missed
            if ($Null -ne $InstallLocation) {
                foreach ($Location in $InstallLocation) {
                    if (Test-Path $Location) {
                        Remove-Item -Path $Location -Force -Recurse
                    }
                }
            }
            exit 1212
        }
    }
    

    Here we have a general uninstaller for Intune. This script lets us feed in the product name as-is, or with wildcards added. We start the script by grabbing the product names from the user; this is set up during the Intune configuration. When it deploys, the first thing this script does is grab all the applications inside Win32_Product. If the application didn’t register itself there, this script is going to be pointless for you.

    Once we have the products, we go through each product name. We first check to see if the product is on the system. If it isn’t, we output success and exit with a unique exit code, which will be used later. However, if the product is on the machine, we grab the install location. Then we pipe the product into the uninstall method via Invoke-CimMethod. Finally, we check whether the install location exists on the installed object. Some applications give us this information, some don’t; some give us multiple locations while others don’t.

    To work around this, we check whether the install location property is null. If it isn’t, we start a loop, because some applications report more than one install location. Then we test if the file path still exists: sometimes the application’s uninstaller removes the folder and sometimes it doesn’t, and that’s why we test. If the location is still there, we remove it with a good old -Force and -Recurse. Finally, we exit with the unique exit code.

    The General Uninstall Detection Script

    $ProductNames = "ProductName","Product2Name"
    $Products = Get-CimInstance -ClassName win32_Product
    foreach ($Product in $ProductNames) {
        if ($null -ne ($Products | where-object {$_.name -like "$Product"})) {    
            exit 1
        } 
    }
    write-host "Success"
    exit 0
    

    With any custom script install or uninstall, a custom detection script is necessary. The first step is to grab the product names. Just like before, it’s a list of strings, so it can handle more than one product. Then we grab all the products with Get-CimInstance and Win32_Product. Then we loop through each product name and see if the product still exists. If it does, we exit with a 1, which basically says: I failed! Intune needs a string output plus an exit code of 0 to count success; an exit of 1 without the string ends the script, and without that string Intune assumes failure. However, if we get through them all and none trigger the exit, we are safe to exit with a 0 and the beautiful word Success.

    Building it out in Intune.

    Building the IntuneWin File

    The first thing you will need to do is save your script into a folder and then download the Win32 Content Prep Tool (IntuneWinAppUtil.exe) to package up the PowerShell script. Unpack the tool and start up your command prompt. The application will guide you through the process of building an intunewin app; a non-interactive packaging example follows the prompts below.

    1. Please specify the source folder: This is the folder that has your script inside it. If you wanted to create something more complex, this part would change your way of deployment. Future blog post coming.
    2. Please specify the setup file: This is the PowerShell script’s name: General-Uninstall.ps1.
    3. Please specify the output folder: This is the folder where the intunewin file will be dropped.
    4. Do you want to specify catalog folder (Y/N)? This one is for more advanced packages; we can say no for this setup.
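
    Answered non-interactively, the same packaging run looks roughly like this; the folder paths are examples, so swap in your own:

        # Package the script folder into a .intunewin file
        .\IntuneWinAppUtil.exe -c "C:\Scripts\General-Uninstall" -s "General-Uninstall.ps1" -o "C:\IntuneWin"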

    Setting Up Intune for Your Uninstaller

    Now that we have the IntuneWin file, it’s time to set up the Intune deployment. This is where you will be able to pass things like the product name to our general uninstaller for Intune.

    • Navigate to Endpoint Manager
    • Click Apps
    • Click Windows
    • Click Add
    • Click the Select App Package File.
    • Add the General-Uninstall.IntuneWin file.
    • Click OK
    • Change the name
    • Click the edit description and add a detailed description for other admins. Make sure to provide instructions on what to do with the detection script.
    • The publisher can be your company or, in my case, self.
    • The category is going to be Computer Management, as it is a general uninstaller.
    • Feel free to add any additional information. Link this blog post if you wish for the information URL.
    • Click Next when finished.

    The next screen is the program screen.

    • Install command:
    PowerShell.exe -ExecutionPolicy Bypass -File .\General-Uninstall.ps1 -ProductName "*Product Name*"
    
    • The uninstall command can be as simple as a file removal.
    • Device restart behavior: Determine behavior based on return codes.
    • Return codes: Remember that unique exit code we had in the script? This is where you place it. I have 1212 set as a success.

    The next screen is the requirements screen. We can do a lot with this screen, but we don’t need to here.

    • Operating System Architecture:
      • 32
      • 64
    • Minimum Operating System: Windows 10 1607.

    Now we need to setup the custom detection.

    • Select Use a custom detection script.
    • Validate your product names to be uninstalled.
    • Upload and click next.
    • Accept the defaults for Dependencies and Supersedences.

    The final screen is where you assign the script to people. There are three sections: Required, Available for enrolled devices, and Uninstall. This is where you select who is going to get what.

    Testing, Monitoring, and deployment

    The assignment area is where you assign the script to whom you want. This is very important: have a test group and apply the app to it first.

    • Deploy the uninstall app to the test device group.
    • Monitor the Intune deployment status for the app to ensure successful deployment to devices/users.
    • Test whether the application is still on a target computer. This can be done with Control Panel, PowerShell, and other options.
    • Refine and correct any issues, then restart the testing.
    • Deploy

    What can we learn as a person today?

    When was the last time you threw a rock? How about a rock into a lake? The last time you did, did you notice the ripples? Just as a deployment like this can cause ripples in your company, removing things from your life can cause just as many ripples in yourself. Make sure you are ready to let go of the thing you are holding onto. It’s always a good idea to test it out, or to have a support group to help you. Those ripples can do some damage, so be ready to uninstall parts of your life before you do it.

    Additional Reading

    WordPress in Docker with LDAP

    A few weeks ago, we built WordPress in Docker. Today I want to go deeper into the world of Docker. We will be working with a single WordPress instance, but we will be able to expand this setup beyond what is currently there over time. Unlike last time, we will be self-containerizing everything and adding plugins along with the PHP LDAP extension, which doesn’t come natively with the wordpress:latest image. It’s time to build WordPress in Docker with LDAP.

    Docker Files

    As we all know, Docker uses compose.yml files for its base configuration. This file processes the requested image based on the instructions in the compose file. Last time we saw that we could mount wp-content to our local file system to edit accordingly; the compose file handles that. This time we are going about it a little differently. The compose file handles the configuration of basic items like mounts, volumes, networks, and more. However, it can’t really do much in the way of editing a Docker image or adding to it. What it can do is call upon a build command.

    services:
      sitename_wp:
        build:
          context: .
          dockerfile: dockerfile
    

    The build block always lives within the service that you want to work with. The context here is the path of the build; this is useful if you keep the build files somewhere else, like a share. Then dockerfile is the name of the build file. I kept it simple and went with dockerfile. This means there are now two files: the docker-compose.yml and this dockerfile.

    What is a Dockerfile?

    The dockerfile takes an image and builds it out, but it has some limitations. The dockerfile adds additional layers that add to the overall size of the image. Non-persistence is the next problem: by its ephemeral nature, anything not baked into a layer disappears after the build. The file can only do single-threaded execution, so it can’t handle multiple things at once; it is very linear in nature. If/then and other control structures are not present in the dockerfile, which keeps it from being a programming language. There are also limits to versioning.

    The dockerfile cannot work with networking or ports, and there is no user management inside the dockerfile process. Complexity is a big problem with these files: the more complex they get, the harder they are to maintain. Never handle passwords inside the dockerfile. The dockerfile has only limited handling of environment variables. The thing that hit me the hardest: limited apt-get/yum commands. The build context is important too, as a large context can slow down performance. Finally, dockerfiles may not build the same on all hosts.

    With those items out of the way, dockerfiles can do a lot of other good things, like layering additional items onto a Docker image. The build runs these instructions as the container’s root, which means you can install programs, move things around, and more. It’s time to look at our dockerfile for WordPress in Docker with LDAP.

    The Dockerfile

    # Use the official WordPress image as a parent image
    FROM wordpress:latest
    
    # Update package list and install dependencies
    RUN apt-get update && \
        apt-get install -y \
            git \
            nano \
            wget \
            libldap2-dev
    
    # Configure and install PHP extensions
    RUN docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ && \
        docker-php-ext-install ldap
    
    # Clean up
    RUN rm -rf /var/lib/apt/lists/*
    
    # Clone the authLdap plugin from GitHub
    RUN git clone https://github.com/heiglandreas/authLdap.git /var/www/html/wp-content/plugins/authLdap
    
    # Add custom PHP configuration
    RUN echo 'file_uploads = On\n\
    memory_limit = 8000M\n\
    upload_max_filesize = 8000M\n\
    post_max_size = 9000M\n\
    max_execution_time = 600' > /usr/local/etc/php/conf.d/uploads.ini
    

    The Breakdown

    Right off the bat, our FROM pulls down the wordpress:latest image. This is our base layer. Then we want to RUN our first command. Related commands like to share a single RUN, and remember, every command runs as the container’s root. The first RUN contains two commands: the apt-get update and the install. We are installing git (so we can grab a plugin), nano (so we can edit files), wget (for future use), and libldap2-dev (the LDAP library PHP builds against).

    apt-get update &&\
    apt-get install -y git nano wget libldap2-dev
    

    Please notice the && \. The \ means to treat the next line as part of this command. The && means “and”; it allows you to run multiple commands on the same line. Since each RUN is a single line, this is very important. The libldap2-dev package is our LDAP library for PHP. Our next RUN configures the Docker PHP extension.

    The Run Commands

    RUN docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ && \
    docker-php-ext-install ldap
    

    The docker-php-ext-* scripts are built into our WordPress image. We tell the configure script where our new LDAP libraries are located for PHP. Then we tell PHP to install the ldap extension. After we have it installed, we need to do some cleanup with the next RUN command.

    rm -rf /var/lib/apt/lists/*
    

    At this point, we have WordPress in Docker with the LDAP PHP module. Now I want a cheap, easy-to-use plugin for the LDAP side. I like the authLdap plugin. We use the git command that we installed above to clone the repo for this plugin, then drop that plugin into the WordPress plugins folder. This is our next RUN command.

    git clone https://github.com/heiglandreas/authLdap.git /var/www/html/wp-content/plugins/authLdap
    

    In our previous blog, we used a printf command to make an uploads.ini file. Well, we don’t need that; you can do it here. We trigger our final RUN command, this time with echo, which simply prints text. So we echo all the PHP settings into our uploads.ini within the image.

    # Add custom PHP configuration
    RUN echo 'file_uploads = On\n\
    memory_limit = 8000M\n\
    upload_max_filesize = 8000M\n\
    post_max_size = 9000M\n\
    max_execution_time = 600' > /usr/local/etc/php/conf.d/uploads.ini
    

    Docker Compose

    Now we have our Dockerfile built out. It’s time to build out our new docker compose file. Here is the compose file for you to read.

    version: '3.8'
    
    services:
      sitename_wp:
        build:
          context: .
          dockerfile: dockerfile
        ports:
          - "8881:80"
          - "8882:443"
        environment:
          WORDPRESS_DB_HOST: sitename_db:3306
          WORDPRESS_DB_USER: ${WORDPRESS_DB_USER}
          WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
          WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
          WORDPRESS_AUTH_KEY: ${WORDPRESS_AUTH_KEY}
          WORDPRESS_SECURE_AUTH_KEY: ${WORDPRESS_SECURE_AUTH_KEY}
          WORDPRESS_LOGGED_IN_KEY: ${WORDPRESS_LOGGED_IN_KEY}
          WORDPRESS_NONCE_KEY: ${WORDPRESS_NONCE_KEY}
          WORDPRESS_AUTH_SALT: ${WORDPRESS_AUTH_SALT}
          WORDPRESS_SECURE_AUTH_SALT: ${WORDPRESS_SECURE_AUTH_SALT}
          WORDPRESS_LOGGED_IN_SALT: ${WORDPRESS_LOGGED_IN_SALT}
          WORDPRESS_NONCE_SALT: ${WORDPRESS_NONCE_SALT}
        volumes:
          - sitename_wp_data:/var/www/html
        depends_on:
          - sitename_db
        networks:
          - sitename_net_wp
    
      sitename_db:
        image: mysql:5.7
        volumes:
          - sitename_wp_db:/var/lib/mysql
        environment:
          MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
          MYSQL_DATABASE: ${MYSQL_DATABASE}
          MYSQL_USER: ${MYSQL_USER}
          MYSQL_PASSWORD: ${MYSQL_PASSWORD}
        networks:
          - sitename_net_wp
    
    networks:
      sitename_net_wp:
        driver: bridge
    
    volumes:
      sitename_wp_data:
      sitename_wp_db:
    

    The WordPress in Docker with LDAP breakdown

    First things first: notice everywhere you see the name “sitename”. To use this compose file correctly, you must replace that name. This will allow you to build multiple sites, each within its own containers, networks, and more. As stated before, the first thing we come across is the build area. This is where we tell the system where our dockerfile lives. Context is the path to the files in question, and dockerfile is the file above.

    Next are the ports. We are working with 8881:80. This is where you choose the ports that you want. The first number is the port exposed on the host; the second is the port the container listens on. Our SSL port is 8882, which maps to the standard 443 on the container’s side.

    ports:
          - "8881:80"
          - "8882:443"
    

    Next are the environment variables. If you notice, some of the items have ${codename} instead of data. These are variables that pull their data from an external file, which keeps the secrets out of the compose file itself. The volumes are the next part of this code: instead of giving a physical location, we give a named volume, which we declare later. Next, we state that the WordPress service depends on the MySQL service. Finally, we attach the container to a network. The process is the same for the database side.

    Finally, we declare our network under networks. This network has its own unique name; as you see, the sitename is within the network name. We set this network to bridge, allowing access from the outside world. We declare our named volumes there as well.

    The hidden environment file

    The next file is the environment file. For every ${codename} inside the compose file, we need a matching environment variable. A special note about the salts for WordPress: special symbols, such as a $ or an =, in the injected values can break the Docker substitution. It is wise to use numbers and letters only. Here is an example:

    WORDPRESS_DB_USER=sitename_us_wp_user
    WORDPRESS_DB_PASSWORD=iamalooserdog
    MYSQL_ROOT_PASSWORD=passwordsareforloosers
    MYSQL_DATABASE=sitename_us_wp_db
    MYSQL_USER=sitename_us_wp_user
    MYSQL_PASSWORD=iamalooserdog
    WORDPRESS_AUTH_KEY=4OHEG7ZKzXd9ysh5lr1gR66UPqNEmCtI5jjYouudEBrUCMtZiS1WVJtyxswfnlMG
    WORDPRESS_SECURE_AUTH_KEY=m0QxQAvoTjk6jzVfOa8DexRyjAxRWoyq08h1fduVSHW0z2o4NU2q7SjKoUvC3cJz
    WORDPRESS_LOGGED_IN_KEY=9LxfBFJ5HyAtbrzb0eAxFG3d9DNkSzODHmPaY6kKIsSQDiVvbkw0tC71J98mDdWe
    WORDPRESS_NONCE_KEY=kKMXJdUTY0b6xZy0bLW9YALpuNHcZfow6lDZbRqqlaNPmsLQq45RhKdCNPt34fai
    WORDPRESS_AUTH_SALT=IFt5xLir4ozifs9v8rsKTxZBFCNzVWHrpPZe8uG0CtZWTqEBhh9XLqya4lBIi9dQ
    WORDPRESS_SECURE_AUTH_SALT=DjkPBxGCJ14XQP7KB3gCCvCjo8Uz0dq8pUjPB7EBFDR286XKOkdolPFihiaIWqlG
    WORDPRESS_LOGGED_IN_SALT=aNYWF5nlIVWnOP1Zr1fNrYdlo2qFjQxZey0CW43T7AUNmauAweky3jyNoDYIhBgZ
    WORDPRESS_NONCE_SALT=I513no4bd5DtHmBYydhwvFtHXDvtpWRmeFfBmtaWDVPI3CVHLZs1Q8P3WtsnYYx0
    

    As always, grab your salts from an official source if you can make it work; here is the WordPress official source site. You can also use PowerShell to give you a single password; take a look here. Of course, replace everything in this file with your own passwords. If you have the scripting knowledge, you can auto-generate much of this.
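
    If you would rather script it, here is a minimal PowerShell sketch that sticks to letters and numbers, per the warning above:

        # Build a 64-character alphanumeric salt (0-9, A-Z, a-z only)
        $chars = [char[]](48..57 + 65..90 + 97..122)
        -join (1..64 | ForEach-Object { Get-Random -InputObject $chars })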

    Bring Docker to Life

    Now we have all of our files created. It’s finally time to bring our creation to life. Run the following command:

    docker compose up -d
    

    If you watch, you will see additional output appear: the dockerfile runs and you can follow along as it builds. If there are errors, you will see them here. Oftentimes the errors will be syntax issues. Docker is really good at showing you what is wrong, so read the errors and try finding the answer online.

    What can we learn as a person today?

    Men are born soft and supple; dead, they are stiff and hard. Plants are born tender and pliant; dead, they are brittle and dry. Thus whoever is stiff and inflexible is a disciple of death. Whoever is soft and yielding is a disciple of life. The hard and stiff will be broken. The soft and supple will prevail. – Lao Tzu

    In seeking assistance from forums like the sysadmin subreddit or Discord channels, I often encounter rigid advice, with people insisting on a singular approach. This rigidity echoes Lao Tzu’s words: “Men are born soft and supple; dead, they are stiff and hard… The hard and stiff will be broken. The soft and supple will prevail.” In professional settings, flexibility and adaptability are crucial. Entering a new company with an open mindset, ready to consider various methods, enables us to navigate around potential obstacles effectively. Conversely, inflexibility in our career, adhering strictly to one method, risks stagnation and failure. Embracing adaptability is not just about avoiding pitfalls; it’s about thriving amidst change. Lao Tzu’s wisdom reminds us that being pliant and receptive in our careers, much like the living beings he describes, leads to resilience and success.

    Docker and WordPress

    It’s time to build on our Docker knowledge. WordPress is a powerful web platform that a large part of the internet is built on; this site is built on WordPress. Whenever I am working on a site for a friend, I will build myself a WordPress instance and create their site in my test environment. When I get it the way I want it, I move it and destroy the original. The best way to destroy the original is to wipe it from existence. This is where Docker and WordPress are friends.

    Docker and WordPress

    This method will allow you to have multiple WordPress sites within your Docker host. We want this because it allows us to test actions between sites and more. It’s one of those amazing little tools that saves so much time. Before that, we want to do some basic setup. The first thing is the networking. We want to build a network in Docker for our WordPress sites. We do this outside of the compose file because making if/then statements in a compose file is a mess. This also allows you to have multiple networks, and so on and so forth. We do this with the “docker network create” command. Of course, you want to be using the docker user or a sudo user.

    docker network create dockerwp
    

    Docker Compose File

    Now that we have our Docker network built, we need to build our compose file. Inside the folder where you keep all of your Docker projects, I suggest making a new folder called wordpress and moving into it. Then create a docker-compose file using the nano command.

    mkdir wordpress
    cd wordpress
    nano docker-compose.yml
    

    Next you will want to copy and paste the docker compose below into it.

    version: "3.8"
    
    services:
      sitename-db:
        image: mysql:latest
        volumes:
          - ./sitename_db/data:/var/lib/mysql
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: AmazingPasswordOfAwesomeness
          MYSQL_DATABASE: sitename_wp_db
          MYSQL_USER: sitename_wp_user
          MYSQL_PASSWORD: AnotherAmazingPassword
        networks:
          - dockerwp
    
      sitename-wp:
        image: wordpress:latest
        depends_on:
          - sitename-db
        volumes:
          - ./sitename_wp/wp-content:/var/www/html/wp-content
          - ./sitename_wp/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
          # Add other files or folders that you want to override here e.g. stylesheets
        ports:
          - "8880:80"
        restart: always
        environment:
          WORDPRESS_DB_HOST: sitename-db:3306
          WORDPRESS_DB_NAME: sitename_wp_db
          WORDPRESS_DB_USER: sitename_wp_user
          WORDPRESS_DB_PASSWORD: AnotherAmazingPassword
        networks:
          - dockerwp

    networks:
      dockerwp:
        name: dockerwp
        external: true
    

    From there, you can run the command “docker compose up -d” to create the WordPress site with the default settings. I don’t suggest it, but you can. How can you use this docker compose file? Firstly, replace wherever you see “sitename” with the site name you want. If you want more than one site, you can copy the db and wordpress sections over and over again, each time replacing the site name with something different. Make sure to change those amazing passwords too.

    How does this compose work?

    This docker compose works by creating an individual world for each site. The word sitename allows you to rename everything the way you want. So if you wanted therandomadmin_com-db, that can happen; if you want therandomadmin_org-db, that can happen too. Each one can have its own name. This is what splits them apart. The network they share allows them to talk with each other and back out again. Uploads.ini allows each site to have its own custom upload limits; I will go over that in just a minute. Just imagine the sites as little cups with two unique coins: as long as they are on the same network, they can talk to each other. If you wanted to, you could take it a step further and make a new network for each compose file. However, that can get messy quickly when you try to herd all of those networks into one place.

    Next steps

    The volumes part of the compose services creates folders. Each folder is important because it holds the content for that container. Notice in the wordpress volumes you will see ./sitename_wp/uploads.ini. This is very important, as it controls how much data can be uploaded. Each site has its own. Thus, you can use the command below to create a simple file for each container. To activate those files, restart the container.

    printf "file_uploads = On\nmemory_limit = 64M\nupload_max_filesize = 64M\npost_max_size = 64M\nmax_execution_time = 600" > ~/uploads.ini
    

    This command creates the ini file that tells the system how much you can upload. I have it set to 64 megabytes, but you can set it to whatever you want. By default, the size limit is 2 MB, which is extremely small for modern images.

    Finally, you can use the nginx reverse proxy system to assign SSL to each site as you see fit. I personally don’t do this, as I don’t expose the site to the outside world, but you can do so. The instructions were covered in the previous blog about Ladder. Believe it or not, that’s it. The next step is to go to the site’s IP or hostname, whichever you chose, and set up your WordPress like normal.

    What can we learn as a person today?

    Recently I went to a tech networking event where I met multiple new and unique people. I enjoyed every minute of talking tech with each of them. While talking to them, I learned new ways to use my dormant skills. Things like my body language, mental health knowledge, and even my cooking were improved. We talked about things like IT, AI, and the color of the sky in some cases. It was a pleasure. Later I was the one helping others on a local Discord server. We talked about the day and the things we needed.

    What spoke to me while working on this blog post was that each WordPress install has its own container and its own world, but the network is the same. This allows the WordPress installs to talk with each other and share items easily. That’s the same way we are as humans. We are all unique in our own ways. I can be someone who enjoys reading a good white paper while someone else enjoys reading a good book about how Pepsi-Cola is made. We are all different. What we have in common is our networks.

    Without our networks, we can’t go far. Imagine this WordPress hosted on its own network, but that network can’t leave your lab. Would it be useful to the outside world? How about this site? What if I locked it down so only one other IP address could read it? This blog wouldn’t be helpful to you. This is how our networking is. If we lock ourselves down to only one group of people, we can’t grow and they can’t grow. This is often how cults are made: they lock themselves down to only themselves and whoever they can recruit.

    Think about it

    As you go throughout your week, think about your networks. If you go to church, that’s a network. If you go to school, that’s a network. How about your Discord friends? That’s a network as well. Each place has its own network, even if that place is temporary, like a store. What can you bring to those networks, and what can you learn from them?