PowerShell App Deployment Toolkit

Over my years of Intune deployments, I have searched for a way to let my end users know that an application is being installed on or uninstalled from their computer. I have used everything from notification bubbles to blanking the screen, and all of those approaches are poor at best. I found a few paid products, but companies just didn't want to pay for them because of the insane pricing. For example, one company wanted us to pay 150 USD per deployment. Multiply that by 1,500 devices and it adds up quickly. It wasn't until I found the PowerShell App Deployment Toolkit that I finally found what I was looking for.

What is the PowerShell App Deployment Toolkit?

This toolkit is immensely powerful and amazingly simple to set up. You can download the toolkit here. It provides a framework to install and uninstall applications using PowerShell through a signed application, which lets us deliver complex, confusing deployments as a single package. A good example would be AutoCAD. Recently, I was tasked with standardizing AutoCAD in a single department. Some members used AutoCAD 2016, some used 2024. This was a problem, as the 2024 files did not work with AutoCAD 2016. Thus, I needed to uninstall the previous versions of AutoCAD before I installed the current version. As all files are backed up, I didn't have to worry about anyone losing files. The toolkit was perfect for this.

Key items I like about the toolkit

Simple packaging

Many application toolkits come with complex packaging. Normally it's an application that wraps itself around another application, over and over, until none of it is transparent anymore. With the PowerShell App Deployment Toolkit, all you need to interact with is the Deploy-Application.ps1 file. That's assuming you are doing more than an MSI file. If you are only using an MSI, all you need to do is drop the file in.

As you can see in the screenshot, this is the package. When you download the zip file, you will be greeted with this amazing structure. Deploy-Application.ps1 is where our code will go. The Files folder is where the installer files go. Following our AutoCAD example, the installer and updates would all be placed inside the Files folder.

Deploy-Application.ps1

This file has an amazing setup. It opens with a wall of documentation inside the file itself, and it explains each step along the way. It is broken up into installation, uninstallation, and repair, and each section has pre-, during, and post-process phases. This is great if you need to kill some services, send a message, or more. It's also helpful because it gives you a structure to work within.
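
To make that structure concrete, here is a rough sketch of where custom code lands in each phase. The commands and values below are illustrative examples, not part of the stock template:

# Pre-Installation: warn the user and close conflicting apps
Show-InstallationWelcome -CloseApps 'acad' -CloseAppsCountdown 60

# Installation: run the installer shipped in the Files folder (name and switches are placeholders)
Execute-Process -Path 'Setup.exe' -Parameters '/quiet /norestart'

# Post-Installation: let the user know we are done
Show-InstallationPrompt -Message 'Install complete.' -ButtonRightText 'OK'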

The Commands

Inside this package there are many useful commands. As I stated in the intro, it's full of ways to communicate what you are doing to the end user. During an application install, you can show which applications need to be closed for the install to work by using the Show-InstallationWelcome command.

Show-InstallationWelcome -CloseApps 'acad,adSSO,AutodeskDesktopApp,AdAppMgrSvc,AdskLicensingService,AdskLicensingAgent,FNPLicensingService' -CloseAppsCountdown 60

This example asks the user to close the listed applications and gives them a 60-second window to do so. This isn't the only thing this command can do.

SYNTAX
    Show-InstallationWelcome [-CloseApps <String>] [-Silent] [-CloseAppsCountdown <Int32>] [-ForceCloseAppsCountdown 
    <Int32>] [-PromptToSave] [-PersistPrompt] [-BlockExecution] [-AllowDefer] [-AllowDeferCloseApps] [-DeferTimes 
    <Int32>] [-DeferDays <Int32>] [-DeferDeadline <String>] [-MinimizeWindows <Boolean>] [-TopMost <Boolean>] 
    [-ForceCountdown <Int32>] [-CustomText] [<CommonParameters>]
    
    Show-InstallationWelcome [-CloseApps <String>] [-Silent] [-CloseAppsCountdown <Int32>] [-ForceCloseAppsCountdown 
    <Int32>] [-PromptToSave] [-PersistPrompt] [-BlockExecution] [-AllowDefer] [-AllowDeferCloseApps] [-DeferTimes 
    <Int32>] [-DeferDays <Int32>] [-DeferDeadline <String>] -CheckDiskSpace [-RequiredDiskSpace <Int32>] 
    [-MinimizeWindows <Boolean>] [-TopMost <Boolean>] [-ForceCountdown <Int32>] [-CustomText] [<CommonParameters>]
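
For example, if you wanted to let users defer the install a few times and verify free disk space first, a sketch combining the parameters above might look like this (the values are illustrative):

Show-InstallationWelcome -CloseApps 'acad' -AllowDefer -DeferTimes 3 -CheckDiskSpace -RequiredDiskSpace 2048 -PersistPrompt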

Other commands, like Execute-Process, will launch processes that you need from the Files directory and more, all while logging what's going on. You can find a full help system for all the unique commands inside the toolkit. Navigating to toolkit > AppDeployToolkit > AppDeployToolkitHelp.ps1 and running that script will bring up a GUI that lets you read all about the commands.
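
As a minimal sketch, assuming your installer lives in the Files folder and takes standard quiet switches:

# Installer name and switches are placeholders for your own package
Execute-Process -Path 'Setup.exe' -Parameters '/quiet /norestart' -WindowStyle 'Hidden'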

Using the Toolkit with Intune

If you want the toolkit to interact with the end user's profile, then you will need to grab a unique little tool from MDT: ServiceUI.exe. You can download MDT here. Once you have MDT installed, we need to pull ServiceUI.exe out of the MDT install. Navigate to C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x64 and copy the ServiceUI.exe file. Place this file in the root of your PowerShell App Deployment Toolkit file structure.

As you can see, the ServiceUI.exe is in the root folder. Now we need to create the package. We can create a Win32 app package; I covered this here, and it is the same concept.

  • The folder would be the folder with your toolkit.
  • The setup file would be the Deploy-Application.exe.
  • The output file would be wherever you want the Intune app to be dumped.
  • And we don't need to catalog the folder (see the command-line sketch after this list).
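
If you prefer to skip the interactive prompts, the prep tool also accepts switches. Here is a sketch with hypothetical paths:

# -c = source folder, -s = setup file, -o = output folder (paths are placeholders)
.\IntuneWinAppUtil.exe -c C:\Packages\PSADT-AutoCAD -s Deploy-Application.exe -o C:\Packages\Output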

Once you have your application built, it's time to see how it works inside Intune. We start by building the application package and, as stated in the previous blog, uploading it. The big difference here is our install and uninstall commands.

Understanding the commands

Our install command will use ServiceUI.exe and Deploy-Application.exe:

  • Install: ServiceUI.exe -process:explorer.exe Deploy-Application.exe
  • Uninstall: ServiceUI.exe -process:explorer.exe Deploy-Application.exe -DeploymentType "Uninstall" -DeployMode "Interactive"

By default, Deploy-Application.exe will be interactive. There are two flags for Deploy-Application.exe:

  • DeploymentType (super straightforward):
    • Install: Installs the application.
    • Uninstall: Uninstalls the application.
    • Repair: Repairs the application.
  • DeployMode:
    • Interactive: Shows all of the prompts needed.
    • NonInteractive: Only shows the required prompts.
    • Silent: Shows no prompts.

We can translate the commands above using these flags. By default, Deploy-Application.exe runs an interactive install, so we know the application will prompt and the end user will see it happen; no flags are needed there. The uninstall command will uninstall, and it will also be interactive. ServiceUI.exe is what allows the application to run as the user and as the system at the same time. The biggest issue with ServiceUI.exe is that the application will not install until someone logs in.
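
For example, making the defaults explicit, a fully silent install would look something like this:

ServiceUI.exe -process:explorer.exe Deploy-Application.exe -DeploymentType "Install" -DeployMode "Silent"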

Overall, the PowerShell App Deployment Toolkit changes the ball game for deployments. I encourage anyone and everyone to dig deeper into it.

What can we learn as a person today?

I live in the southern United States. From time to time I hear people battling over belief systems. Over my lifetime I have come to an understanding of how these systems work. I liken "objective truth" to fish in a sea. Our belief systems are the nets we use to capture those fish. Some nets are better than others. The water of the sea is the useless, distracting, or false information; it only makes it harder to bring those pieces of objective truth into ourselves. A good net can capture a lot of fish and let the water out at the same time. A bad net, like a tarp, captures some fish but becomes unmanageable because of the water it holds. It is the same way with our beliefs. We are only strong enough to lift so much at different points in our lives.

Premade Nets

I see organized religions as premade nets. Think of one like a toolkit: a format that is easy to use and lets you get things done. Does the toolkit work for everyone? No. Just like this PowerShell toolkit, it would be useless in a world without PowerShell; on ChromeOS, this toolkit isn't useful. It is the same with some beliefs. They are useful where they are, but not useful in other places. Sometimes these toolkits, these nets, are useful for some people but not others. If you don't know PowerShell, this toolkit won't be useful to you. If you are shame-sensitive, some religions are not for you.

Everyone has their own tool set or net. No single tool set is inherently bad; it's how we use them and where we use them. If you take a net to a small pond, get ready to waste your time and damage your net. If you throw your net aggressively into an aggressive sea, get ready to lose that net.

Homemade Nets

Once someone understands how the nets are made and how to repair them, it's always best for them to start building their own nets using the techniques they learned on their previous nets. Having a net, a tool set, of your own gives you full knowledge of it and the ability to repair it quickly. This belief system would be uniquely yours and different from everyone else's. So, when it breaks, you can grow it, replace parts, and more without any problems. It's yours and no one else's.

Let’s build our own beliefs.

General Uninstaller for Intune

This past month I was given a task to uninstall a few applications in Intune. However, the apps' built-in uninstall features did not work according to plan, while a bunch of them did work with the CIM uninstall method, which I thought was funny. After writing a bunch of the same code over and over again, I decided to write a general uninstaller for Intune. It also requires a custom detection script.

The General Uninstaller Script

param (
    [string[]]$ProductNames
)

#Grabs every MSI-registered application on the system
$Products = Get-CimInstance -ClassName win32_Product

foreach ($Product in $ProductNames) {
    if ($null -eq ($Products | Where-Object {$_.Name -like "$Product"})) {
        #Product isn't installed; report success with our unique exit code
        Write-Host "Success"
        exit 1212
    } else {
        #Grabs the install location
        $InstallLocation = ($Products | Where-Object {$_.Name -like "$Product"}).InstallLocation

        #Uninstalls the product in question
        $Products | Where-Object {$_.Name -like "$Product"} | Invoke-CimMethod -MethodName Uninstall

        #Removes any leftover install folders the uninstaller didn't clean up
        if ($null -ne $InstallLocation) {
            foreach ($Location in $InstallLocation) {
                if (Test-Path $Location) {
                    Remove-Item -Path $Location -Force -Recurse
                }
            }
        }
        exit 1212
    }
}

Here we have a general uninstaller for Intune. This script allows us to feed in the product name as-is, or we can add wildcards to the name. We start the script by grabbing the product names from the user; this is done during the Intune setup. When it deploys, the first thing this script does is grab all the applications registered in Win32_Product. If the application didn't register itself there, then this script is going to be pointless for you.

Once we have the products, we go through each product name. We first check to see if the product is on the system. If it isn't, we output success and exit with a unique exit code, which will be used later. However, if the product is on the machine, we grab the install location. Then we pipe the product into the uninstall method via Invoke-CimMethod. Finally, we check whether an install location exists on the installed object. Some applications give us this information, some don't; some give us multiple locations while others don't.

To work around this, we check whether the install location property is null. If it isn't null, we move on and start a loop; the loop is only there because some applications have more than one install location. Then we test whether the file path still exists. Sometimes the application's uninstaller removes the folder and sometimes it doesn't, which is why we test. If the file location is still there, we remove it with a good old -Force and -Recurse. Finally, we exit with the unique exit code.
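
For a quick local test before packaging, you can call the script directly. The product names here are examples only:

# Wildcards work because the script compares with -like
.\General-Uninstall.ps1 -ProductNames "7-Zip*","*Reader*"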

The General Uninstall Detection Script

$ProductNames = "ProductName","Product2Name"

#Grabs every MSI-registered application on the system
$Products = Get-CimInstance -ClassName win32_Product

foreach ($Product in $ProductNames) {
    #If any product is still present, fail the detection
    if ($null -ne ($Products | Where-Object {$_.Name -like "$Product"})) {
        exit 1
    }
}

#Nothing was found: output the string and exit 0 so Intune counts it as a success
Write-Host "Success"
exit 0

With any custom script install or uninstall, a custom detection script is necessary. The first step is to grab the product names. Just like before, it's a list of strings, so it can handle more than one product. Then we grab all the products with our CIM instance of Win32_Product. Then we loop through each product name and check whether the product still exists. If it does, we exit with a 1; this basically says, I failed! Intune needs a string output plus an exit code of 0 to count a detection as successful. An exit of 1 without the string ends the script, and without that string Intune assumes failure. However, if we go through them all and none trigger the exit, then we are safe to exit with a 0 and the beautiful word success.

Building it out in Intune.

Building the IntuneWin File

The first thing you will need to do is save your script into a folder and then download the Win32 Content Prep Tool to package up the PowerShell script. Unpack this tool and start up your command prompt. The application will guide you through the process of setting up an intunewin app.

  1. Please specify the source folder: This is the folder that has your script inside of it. If you wanted to create something more complex, this part would change your way of deployment. Future blog post coming.
  2. Please specify the setup file: This is going to be the PowerShell script name: General-Uninstall.ps1.
  3. Please specify the output folder: This is the folder where the intunewin file will be dropped.
  4. Do you want to specify catalog folder (Y/N)? This one is for more advanced packages. We can say no to this option for this setup.

Setting Up Intune for Your Uninstaller

Now that we have the intunewin file, it's time to set up the Intune deployment. This is where you will be able to add things like the product name to our general uninstaller for Intune.

  • Navigate to Endpoint Manager.
  • Click Apps.
  • Click Windows.
  • Click Add.
  • Click Select app package file.
  • Add the General-Uninstall.intunewin file.
  • Click OK.
  • Change the name.
  • Click Edit description and add a detailed description for other users. Make sure to provide instructions on what to do with the detection script.
  • The publisher can be your company or, in my case, self.
  • The category is going to be Computer Management, as it is a general uninstaller.
  • Feel free to add any additional information. Link this blog post if you wish for the information URL.
  • Click Next when finished.

The next screen is the program screen.

  • Install command:
PowerShell.exe -ExecutionPolicy Bypass -File .\General-Uninstall.ps1 -ProductNames "*Product Name*"
  • The uninstall command can be as simple as a file removal.
  • Device restart behavior: Determine behavior based on return codes.
  • Return codes: Remember that unique exit code we had in the script? This is where you place that code. I have 1212 set as a success.

The next screen is the requirements screen. We can do a lot with this screen, but we don't need to here.

  • Operating System Architecture:
    • 32
    • 64
  • Minimum Operating System: Windows 10 1607.

Now we need to setup the custom detection.

  • Select "Use a custom detection script".
  • Validate your product names to be uninstalled.
  • Upload and click Next.
  • Accept the defaults for dependencies and supersedence.

The final screen is where you are able to assign the script to people. There are three sections: required, available for enrolled devices, and uninstall. This is where you will select who is going to get what.

Testing, Monitoring, and Deployment

The assignment area is where you assign the script to whomever you want. This is very important: this is where you want to test first. Have a test group and apply the app to it before anything else.

  • Deploy the uninstall app to the test device group.
  • Monitor the Intune deployment status for the app to ensure successful deployment to devices/users.
  • Test whether the application is still on a target computer. This can be done with Control Panel, PowerShell, and other options.
  • Refine and correct any issues, then restart the testing.
  • Deploy

What can we learn as a person today?

When was the last time you threw a rock? How about a rock into a lake? The last time you did, did you notice the ripples? Just as a deployment like this can cause ripples in your company, removing things from your life can cause just as many ripples in yourself. Make sure you are ready to let go of that thing you are holding onto. It's always a good idea to test it out, or to have a support group to help you. Those ripples can do some damage, so be ready before you uninstall parts of your life.

Get Intune Devices with PowerShell

Recently I was working with a company that gave me a really locked-down account. I wasn't used to this, as I have always had some level of read-only access in each system. I was unable to create a Graph API application either, so I was limited to just my account. This was a great time to use the newer Graph API command lines, because when you connect to Graph API using the PowerShell module, you inherit the access your account has. So today we will get Intune devices with PowerShell and Graph API.

The Script

Function Get-IntuneComputer {
    [cmdletbinding()]
    param (
        [string[]]$Username,
        [switch]$Disconnect
    )
    begin {

        #Connects to Graph API

        #Installs the module if it isn't already available
        if ($null -eq (Get-Module -ListAvailable Microsoft.Graph.Intune)) {Install-Module Microsoft.Graph.Intune}

        #Imports the module
        Import-Module Microsoft.Graph.Intune

        #Tests the current connection with a known computer
        $test = Get-IntuneManagedDevice -Filter "deviceName eq 'AComputer'"

        #If the test is empty, then we connect
        if ($null -eq $test) {Connect-MSGraph}
    }
    process {

        #Checks to see if the Username flag was used
        if ($PSBoundParameters.ContainsKey('Username')) {
            #If it was used, then we go through each username and get its information
            $ReturnInfo = foreach ($User in $Username) {
                Get-IntuneManagedDevice -Filter "userPrincipalName eq '$User'" | Select-Object deviceName,lastSyncDateTime,manufacturer,model,isEncrypted,complianceState
            }
        } else {

            #Grabs all of the devices and simple common information.
            $ReturnInfo = Get-IntuneManagedDevice | Get-MSGraphAllPages | Select-Object deviceName,lastSyncDateTime,manufacturer,model,isEncrypted,complianceState,userDisplayName,userPrincipalName
        }
    }
    end {

        #Returns the information
        $ReturnInfo

        #Disconnects if we asked for it.
        if ($Disconnect) {Disconnect-MgGraph}
    }
}

The Breakdown

Parameters

We enter the script with the common parameters. The CmdletBinding attribute gives us additional common parameters like -Verbose. Next, we have a list of strings called Username. We are using a list of strings to allow multiple inputs; doing this, we should be able to feed in a list of usernames and get each user's Intune device information. Note that because this is a multi-value parameter, you will need to handle it with a loop later. Next is the Disconnect switch. It's either present or not. By default, this script will stay connected to Intune.

Connecting to Intune

Next we will connect to the Intune system. First, we need to check for and install the module. We check the install by using the Get-Module command with the -ListAvailable switch, looking for the Microsoft.Graph.Intune module. If it doesn't exist, we want to install it.

if ($null -eq (Get-Module -ListAvailable Microsoft.Graph.Intune)) {Install-Module Microsoft.Graph.Intune}

If the module does exist, we simply skip the install and move to the import. We will be importing the same module:

Import-Module Microsoft.Graph.Intune

Afterwards, we want to test the connection to Microsoft Intune. The best way to do this is to test a command. You can do it however you want; I am testing against a device that I know is in Intune.

$test = Get-IntuneManagedDevice -Filter "deviceName eq 'AComputer'"

We will be using this command later. Notice the filter: we are filtering on deviceName here. Replace 'AComputer' with whatever you want. If you want to use another command, feel free; this was the fastest command that I tested. The command above will produce a null response if you are not connected. Thus, we can test $test with an if statement: if it comes back with information, we are good, but if it is null, we tell it to connect.

if ($null -eq $test) {Connect-MSGraph}

Get Intune Devices with PowerShell

Now it's time to Get Intune Devices with PowerShell. The first thing we check is whether the Username parameter was used. We didn't make this parameter mandatory, to give the script flexibility; now we need to code for said flexibility. If the command contained the Username flag, we want to honor that usage. We do this with $PSBoundParameters, the collection of parameters that were actually passed to the command. We are looking to see if it contains a key of Username. If it does, we grab the needed information by username; if it doesn't, we grab everything.

if ($PSBoundParameters.ContainsKey('Username')) {
    #Grab based on username
} else {
    #get every computer
}

As we said, the list-of-strings parameter needs a loop, and this is where we do it. We first create a foreach loop over the usernames, and we dump the gathered information into a return variable, $ReturnInfo. Inside our loop, we gather the required information. The Get-IntuneManagedDevice filter needs to use userPrincipalName. These filters are string filters, not object filters, so the -like term will cause issues; this is why we use the eq term.

Now, if we are not searching the Username, we want to grab all the devices on the network. This way if you run the command without any flags, you will get information. Here, we use the Get-IntuneManagedDevice followed by the Get-MSGraphAllPages to capture all the pages in question.

if ($PSBoundParameters.ContainsKey('Username')) {
    $ReturnInfo = foreach ($User in $Username) {
        Get-IntuneManagedDevice -Filter "userPrincipalName eq '$User'"
    }
} else {
    $ReturnInfo = Get-IntuneManagedDevice | Get-MSGraphAllPages
}

Ending the Script

Now it's time to end the script. We want to return the information we gathered. I want to know some basic information, and the commands presented produce a large amount of data. In this case we will be selecting the following:

  • DeviceName
  • LastSyncDateTime
  • Manufacturer
  • Model
  • isEncrypted
  • ComplianceState
  • UserDisplayName
  • UserPrincipalName
$ReturnInfo | select-object deviceName,lastSyncDateTime,manufacturer,model,isEncrypted,complianceState,userDisplayName,userPrincipalName

Finally, we test to see if we wanted to disconnect. A simple if statement does this. If we choose to disconnect we run the Disconnect-MgGraph command.

if ($Disconnect) {Disconnect-MgGraph}
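
Putting it all together, usage might look like this (the usernames and path are illustrative):

# All devices, then disconnect when done
Get-IntuneComputer -Disconnect

# Devices for specific users, exported for later review
Get-IntuneComputer -Username 'jdoe@contoso.com','asmith@contoso.com' |
    Export-Csv -Path 'C:\temp\IntuneDevices.csv' -NoTypeInformation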

What can we learn as a person

In PowerShell, we can streamline the output that we get. Often, commands like these produce a lot of information that is useful in general but useless at the moment. This is like our work environment. I used to be a big advocate of dual screens, and no more than that. I would often have five things going on at once. My desk used to have everything I needed to quickly grab and solve a personal problem. For example, my chapstick sat on my computer stand, my water bottle beside the monitor; papers, sticky notes, and more were scattered across my desk. I wondered why I couldn't focus. Our brains are like batteries, and focus is the charge. Our brains take in everything. Your brain notices the speck of dirt on the computer monitor and the sticky note, without your password on it, hanging from your monitor. This takes your charge.

Having two monitors is great, and I still use two. However, I have a focused monitor and a second monitor for when I need to connect to something else. At some point I will get a larger, wider monitor and drop the second one altogether. Having less allows your brain to give more attention to one or two tasks. For someone like me, there is more than one task going at any moment, and that's OK with my brain. Let's use our Select-Object command in real life and remove the distractions from our desks.

A Guide to Subject Matter Expert Tickets

In the intricate ecosystem of IT support, the quality of communication in ticket submissions can significantly influence the efficiency of problem resolution. Imagine walking into a dense forest, each tree representing a different issue or ticket awaiting resolution. Just as a seasoned guide can navigate these woods with ease, providing clear paths and descriptions, a Subject Matter Expert (SME) in IT can illuminate the way to swift solutions with well-crafted tickets.

The Spectrum of Ticket Details

Venture into the thicket of daily IT support tickets, and you’ll encounter a wide array of communication styles. On one end, there are tickets like faint, barely noticeable trails – vague, minimal details offered by users unsure of what information is pertinent. Bob from manufacturing, for example, might simply state, “My computer won’t turn on,” leaving the path to resolution obscured by underbrush.

Contrastingly, tickets from more technically adept users, like Jan from accounting, are akin to well-trodden paths through the forest, marked by signs and clear directions. Jan not only mentions reinserting cables and attempting to power on her computer but also notes the absence of the usual boot-up text, laying breadcrumbs for IT support to follow towards a solution.

Crafting a Map to Resolution

Subject Matter Experts (SMEs) stand as the rangers of this forest, armed with the knowledge and tools to guide others through even the densest undergrowth. Here’s how they can effectively chart the course:

  • Know Your Audience: Just as a ranger alters their guidance based on the experience of the hikers, SMEs should tailor their ticket submissions to the technical level of the IT support team. This ensures that the instructions are neither too complex for general support staff nor too simplistic for specialists.
  • Use a Structured Format: A structured ticket is like a map, offering a clear overview of the terrain at a glance. By organizing the issue, steps taken, and potential solutions logically, SMEs create a guide that others can follow easily, avoiding unnecessary detours.
  • Prioritize Clarity and Brevity: In the dense forest of IT issues, clarity acts as a beacon, guiding the support team directly to the heart of the problem. SMEs should aim to illuminate the path with precise, concise language, ensuring no one gets lost in unnecessary details.
  • Offer Potential Solutions: Suggesting solutions or workarounds is akin to marking potential paths on a map. While not all may lead directly to the destination, they provide starting points, accelerating the journey towards resolution.
  • Include Visuals When Necessary: Sometimes, the most effective way to describe a landscape is through visuals. Diagrams, screenshots, and videos can serve as snapshots of the issue, offering immediate context and understanding.
  • Encourage Open Communication: Ending a ticket with an invitation for questions is like leaving a trail of markers for others to follow, ensuring that if the path becomes unclear, further guidance is just a call away.

Navigating the Forest Together

In the realm of IT, “Subject Matter Expert Tickets” are more than just requests for assistance; they’re opportunities for SMEs to lead by example, demonstrating how detailed, well-structured communication can streamline the resolution process. It’s about creating a collaborative environment where every ticket, like a trail in the forest, is clearly marked and navigable, leading to a more efficient, effective IT support system.

By adopting these strategies, SMEs not only enhance their own credibility but also contribute to a culture of clarity and cooperation, ensuring that the vast forest of IT support is a little easier for everyone to navigate.

WordPress in Docker with LDAP

A few weeks ago, we built WordPress in Docker. Today I want to go deeper into the world of Docker. We will be working with a single WordPress instance, but we will be able to expand this setup beyond what is currently there over time. Unlike last time, we will be self-containerizing everything and adding plugins, along with the PHP LDAP extension, which doesn't natively come with the wordpress:latest image. It's time to build WordPress in Docker with LDAP.

Docker Files

As we all know, Docker uses compose.yml files for its base configuration. This file processes the requested image based on the instructions in the compose file. Last time, we saw that we could mount wp-content to our local file system to edit accordingly; the compose file handles that. This time we are going about it a little differently. The compose file handles the configuration of basic items like mounts, volumes, networks, and more. However, it can't really do much in the way of editing a Docker image or adding to it. For that, the compose file has the ability to call upon a build command.

services:
  sitename_wp:
    build:
      context: .
      dockerfile: dockerfile

The build block is always within the service that you want to work with. The context here is the path of the build; this is useful if you have the build files somewhere else, like a share. Then dockerfile is the name of the build file. I kept it simple and went with dockerfile. This means there are now two files: the docker-compose.yml and this dockerfile.

What is the Dockerfile?

The dockerfile takes an image and builds on it, but it has some limitations. The dockerfile adds additional layers, which add to the overall size of the image. Non-persistence is the next problem: by its ephemeral nature, anything not committed to a layer disappears after the build. Each instruction is a single-threaded execution, so it can't handle multiple things at once; it's very linear in nature. If/then and other control structures are not present in the dockerfile, which makes it a poor fit as a programming language. There are also limits to versioning.

The dockerfile cannot work with networking or ports, and there is no user management inside the dockerfile process. Complexity is a big problem with these files: the more complex they get, the harder they are to maintain. Never handle passwords inside the dockerfile. The dockerfile also has only limited support for environment variables compared to the compose file. The thing that hit me the hardest: limited apt-get/yum commands. Build context is important, as a large context can slow down build performance. Finally, dockerfiles may not behave the same on all hosts.

With those items out of the way, dockerfiles can do a lot of good things, like layering additional items onto a Docker image. The build runs these instructions as root inside the image, which means you can install programs, move things around, and more. It's time to look at our dockerfile for our WordPress in Docker with LDAP.

The Dockerfile

# Use the official WordPress image as a parent image
FROM wordpress:latest

# Update package list and install dependencies
RUN apt-get update && \
    apt-get install -y \
        git \
        nano \
        wget \
        libldap2-dev

# Configure and install PHP extensions
RUN docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ && \
    docker-php-ext-install ldap

# Clean up
RUN rm -rf /var/lib/apt/lists/*

# Clone the authLdap plugin from GitHub
RUN git clone https://github.com/heiglandreas/authLdap.git /var/www/html/wp-content/plugins/authLdap

# Add custom PHP configuration
RUN echo 'file_uploads = On\n\
memory_limit = 8000M\n\
upload_max_filesize = 8000M\n\
post_max_size = 9000M\n\
max_execution_time = 600' > /usr/local/etc/php/conf.d/uploads.ini

The Breakdown

Right off the bat, our FROM pulls down the wordpress:latest image. This is the image we will be using; it is our base layer. Then we want to RUN our first command. RUN instructions like to group related commands together, and remember, every command is run as the container's root. The first RUN contains two commands: the apt-get update and the install. We are installing git (this way we can grab a plugin), nano (so we can edit files), wget (for future use), and libldap2-dev (our LDAP library for PHP).

apt-get update &&\
apt-get install -y git nano wget libldap2-dev

Please notice the && \. The \ means to treat the next line as part of this command, and && means "and then": it allows you to run multiple commands on the same line. Since each RUN is a single line, this is very important. The libldap2-dev package is our LDAP library for PHP. Our next RUN configures the Docker PHP extension.

The Run Commands

RUN docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ && \
docker-php-ext-install ldap

The docker-php-ext-* commands are built-in scripts in our WordPress image. We tell the configure script where our new LDAP libraries are located for PHP, then we tell PHP to install the ldap extension. After we have it installed, we do some cleanup with the next RUN command.

rm -rf /var/lib/apt/lists/*

At this point, we have WordPress in Docker with the LDAP PHP module. Now I want a cheap, easy-to-use plugin for the LDAP side. I like the authLdap plugin. We use the git command that we installed above to clone the repo for this plugin, then drop that plugin into the WordPress plugins folder. This is our next RUN command.

git clone https://github.com/heiglandreas/authLdap.git /var/www/html/wp-content/plugins/authLdap

In our previous blog, we used a printf command to make an uploads.ini file. Well, we don't need that; we can do it here. We trigger our final RUN command, this time with echo. Echo just prints text, so we echo all the PHP settings into our uploads.ini within the image.

# Add custom PHP configuration
RUN echo 'file_uploads = On\n\
memory_limit = 8000M\n\
upload_max_filesize = 8000M\n\
post_max_size = 9000M\n\
max_execution_time = 600' > /usr/local/etc/php/conf.d/uploads.ini

Docker Compose

Now we have our Dockerfile built out. It’s time to build out our new docker compose file. Here is the compose file for you to read.

version: '3.8'

services:
  sitename_wp:
    build:
      context: .
      dockerfile: dockerfile
    ports:
      - "8881:80"
      - "8882:443"
    environment:
      WORDPRESS_DB_HOST: sitename_db:3306
      WORDPRESS_DB_USER: ${WORDPRESS_DB_USER}
      WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
      WORDPRESS_AUTH_KEY: ${WORDPRESS_AUTH_KEY}
      WORDPRESS_SECURE_AUTH_KEY: ${WORDPRESS_SECURE_AUTH_KEY}
      WORDPRESS_LOGGED_IN_KEY: ${WORDPRESS_LOGGED_IN_KEY}
      WORDPRESS_NONCE_KEY: ${WORDPRESS_NONCE_KEY}
      WORDPRESS_AUTH_SALT: ${WORDPRESS_AUTH_SALT}
      WORDPRESS_SECURE_AUTH_SALT: ${WORDPRESS_SECURE_AUTH_SALT}
      WORDPRESS_LOGGED_IN_SALT: ${WORDPRESS_LOGGED_IN_SALT}
      WORDPRESS_NONCE_SALT: ${WORDPRESS_NONCE_SALT}
    volumes:
      - sitename_wp_data:/var/www/html
    depends_on:
      - sitename_db
    networks:
      - sitename_net_wp

  sitename_db:
    image: mysql:5.7
    volumes:
      - sitename_wp_db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    networks:
      - sitename_net_wp

networks:
  sitename_net_wp:
    driver: bridge

volumes:
  sitename_wp_data:
  sitename_wp_db:

The WordPress in Docker with LDAP breakdown

First things first: notice everywhere you see the name "sitename". To use this Docker setup correctly, you must replace that name. This will allow you to build multiple sites, each within its own containers, networks, and more. As stated before, the first thing we come across is the build area. This is where we tell the system where our dockerfile lives: context is the path to the file in question, and dockerfile is the file above.

Next are the ports. We are working with port 8881:80. This is where you choose the ports that you want: the first number is the port your host system listens on, and the second number is the port the container understands. Our SSL port is 8882, which maps to the standard 443 on the container's side.

ports:
      - "8881:80"
      - "8882:443"

Next are the environment variables. If you notice, some of the items have ${codename} instead of data. These are variables that pull their data externally, which prevents embedding the secrets inside the compose file. The volume is the next part of this code: instead of giving a physical location, we give it a named volume, which we declare later. Next, we state that the WordPress service depends on the MySQL service. Finally, we tie this container to a network. The process is the same for the database side.

Finally, we declare our network under networks. This network has its own unique name; as you can see, the sitename is within the network name. We set this network to bridge, allowing access from the outside world. Finally, we declare our volumes as well.

The hidden environment file

The next file is the environment file. For every ${codename} inside the compose file, we need an environment variable to match it. Some special notes about the salts for WordPress: unique symbols, such as a $ or an =, break the variable injection, so it is wise to use numbers and letters only. Here is an example:

WORDPRESS_DB_USER=sitename_us_wp_user
WORDPRESS_DB_PASSWORD=iamalooserdog
MYSQL_ROOT_PASSWORD=passwordsareforloosers
MYSQL_DATABASE=sitename_us_wp_db
MYSQL_USER=sitename_us_wp_user
MYSQL_PASSWORD=iamalooserdog
WORDPRESS_AUTH_KEY=4OHEG7ZKzXd9ysh5lr1gR66UPqNEmCtI5jjYouudEBrUCMtZiS1WVJtyxswfnlMG
WORDPRESS_SECURE_AUTH_KEY=m0QxQAvoTjk6jzVfOa8DexRyjAxRWoyq08h1fduVSHW0z2o4NU2q7SjKoUvC3cJz
WORDPRESS_LOGGED_IN_KEY=9LxfBFJ5HyAtbrzb0eAxFG3d9DNkSzODHmPaY6kKIsSQDiVvbkw0tC71J98mDdWe
WORDPRESS_NONCE_KEY=kKMXJdUTY0b6xZy0bLW9YALpuNHcZfow6lDZbRqqlaNPmsLQq45RhKdCNPt34fai
WORDPRESS_AUTH_SALT=IFt5xLir4ozifs9v8rsKTxZBFCNzVWHrpPZe8uG0CtZWTqEBhh9XLqya4lBIi9dQ
WORDPRESS_SECURE_AUTH_SALT=DjkPBxGCJ14XQP7KB3gCCvCjo8Uz0dq8pUjPB7EBFDR286XKOkdolPFihiaIWqlG
WORDPRESS_LOGGED_IN_SALT=aNYWF5nlIVWnOP1Zr1fNrYdlo2qFjQxZey0CW43T7AUNmauAweky3jyNoDYIhBgZ
WORDPRESS_NONCE_SALT=I513no4bd5DtHmBYydhwvFtHXDvtpWRmeFfBmtaWDVPI3CVHLZs1Q8P3WtsnYYx0

As always, grab your salts from an official source if you can make it work; here is the WordPress official source site. You can also use PowerShell to give you a single password; take a look here. Of course, replace everything in this file with your own passwords. If you have the scripting knowledge, you can auto-generate much of this.
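
If you want to script it, here is a minimal PowerShell sketch that sticks to letters and numbers, per the warning above:

# Builds a 64-character alphanumeric salt; no $ or = characters to break the variable injection
$chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'.ToCharArray()
-join (1..64 | ForEach-Object { Get-Random -InputObject $chars })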

Bring Docker to Life

Now we have all of our files created. It’s finally time to bring our creation to life. Run the following command:

docker compose up -d

If you watch, additional information appears: the dockerfile runs and you can follow along as it builds. If there are errors, you will see them here. Often, the errors will be syntax issues. Docker is really good at showing you what is wrong, so read the errors and try finding the answer online.
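
A couple of commands that help when the build misbehaves (the service name comes from the compose file above):

# Force a clean rebuild of the dockerfile layers
docker compose build --no-cache

# Follow the WordPress container's logs
docker compose logs -f sitename_wp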

What can we learn as a person today?

"Men are born soft and supple; dead, they are stiff and hard. Plants are born tender and pliant; dead, they are brittle and dry. Thus whoever is stiff and inflexible is a disciple of death. Whoever is soft and yielding is a disciple of life. The hard and stiff will be broken. The soft and supple will prevail."
Lao Tzu

In seeking assistance from forums like the sysadmin subreddit or Discord channels, I often encounter rigid advice, with people insisting on a singular approach. This rigidity echoes Lao Tzu’s words: “Men are born soft and supple; dead, they are stiff and hard… The hard and stiff will be broken. The soft and supple will prevail.” In professional settings, flexibility and adaptability are crucial. Entering a new company with an open mindset, ready to consider various methods, enables us to navigate around potential obstacles effectively. Conversely, inflexibility in our career, adhering strictly to one method, risks stagnation and failure. Embracing adaptability is not just about avoiding pitfalls; it’s about thriving amidst change. Lao Tzu’s wisdom reminds us that being pliant and receptive in our careers, much like the living beings he describes, leads to resilience and success.

AD User Audit with PowerShell

In the intricate web of modern network management, the security and integrity of user accounts stand paramount. “AD User Audit with PowerShell” isn’t just a technical process; it’s a critical practice for any robust IT infrastructure. Why, you ask? The answer lies in the layers of data and accessibility that each user account holds within your organization’s Active Directory (AD). Whether it’s for compliance, security, or efficient management, auditing user accounts is akin to a health check for your network’s security posture.

Understanding the last login times, password policies, and group memberships doesn’t only highlight potential vulnerabilities; it also paves the way for proactive management and policy enforcement. PowerShell, with its powerful scripting capabilities, emerges as an indispensable tool in this endeavor. It transforms what could be a tedious manual audit into an efficient, automated process.

Why Perform an AD User Audit?

An “AD User Audit with PowerShell” serves multiple purposes:

  1. Security: Identifying inactive accounts or those with outdated permissions reduces the risk of unauthorized access.
  2. Compliance: Regular audits ensure adherence to internal and external regulatory standards.
  3. Operational Efficiency: Understanding user activities and configurations aids in optimizing network management.

The Script:

#Needs the RSAT Active Directory module installed
Import-Module ActiveDirectory

#Calculates when a password expires based on when it was last set
Function Get-PasswordExpiryDate {
    param (
        [DateTime]$PasswordLastSet,
        [System.TimeSpan]$MaxPasswordAge
    )
    return $PasswordLastSet.AddDays($MaxPasswordAge.TotalDays)
}

#Grabs the domain's maximum password age from the default domain password policy
$MaxPasswordAge = (Get-ADDefaultDomainPasswordPolicy).MaxPasswordAge

#Pulls every user with the audit properties, builds the calculated columns, and exports the report
Get-ADUser -Filter * -Properties Name, samaccountname, Enabled, LastLogonDate, PasswordLastSet, MemberOf, LockedOut, whenCreated | 
Select-Object Name, samaccountname, Enabled, LastLogonDate, PasswordLastSet, @{Name="PasswordExpiryDate"; Expression={Get-PasswordExpiryDate -PasswordLastSet $_.PasswordLastSet -MaxPasswordAge $MaxPasswordAge}}, @{Name="GroupMembership";Expression={($_.MemberOf | Get-ADGroup | Select-Object -ExpandProperty Name) -join ", "}}, LockedOut, whenCreated | 
Export-Csv -Path "c:\temp\AD_User_Audit_Report.csv" -NoTypeInformation

The Breakdown

The first thing we do in this PowerShell script is import the ActiveDirectory module. Then we set up a nice little function to help grab the expiry date of an account's password. The function takes the password-last-set date and the maximum password age as parameters and resolves the expiry date from that time span. Next, we grab the data we need: Name, samaccountname, Enabled, LastLogonDate, PasswordLastSet, MemberOf, LockedOut, and finally whenCreated. From there, we use Select-Object, and this is where some of the good stuff happens. When we select the password expiry date, we use the @ name-and-expression syntax. Let's break that down some.

Special Select

@{Name="PasswordExpiryDate"; Expression={Get-PasswordExpiryDate -PasswordLastSet $_.PasswordLastSet -MaxPasswordAge $MaxPasswordAge}},

Inside the Select-Object command, you can select different properties and create custom ones. Having a name or label is the first part; the expression can be its own command line. Here we use the function from earlier: we push Get-ADUser's PasswordLastSet value through Get-PasswordExpiryDate. Make sure to wrap the expression inside curly brackets. We can do the same thing with the group membership.
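
The same name-and-expression pattern works with any command. For example, a made-up memory column on Get-Process:

# 'MemoryMB' is a label of our choosing; the expression computes the value per object
Get-Process | Select-Object Name, @{Name='MemoryMB'; Expression={[math]::Round($_.WorkingSet64 / 1MB, 1)}}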

The GroupMembership expression pulls apart MemberOf and passes each entry through Get-ADGroup, and from there we grab the name of each group. Since we want to export this to a CSV, it is best to use a -join to flatten the list. Once we have that information set up, we export it to a CSV, and then we are done. This information can be passed on to the security admin to work with.

Script Deployment and Usage

The execution of this script in your environment is straightforward but requires administrative privileges. It’s designed to run seamlessly, outputting a comprehensive report that encapsulates the health and status of each user account in your Active Directory. This report not only informs your immediate security strategies but also assists in long-term IT planning and policy development.
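
Once the CSV lands in c:\temp, follow-up filtering is easy. The 90-day threshold here is just an example:

# Accounts that haven't logged on in 90+ days, per the audit report
Import-Csv 'C:\temp\AD_User_Audit_Report.csv' |
    Where-Object { $_.LastLogonDate -and [datetime]$_.LastLogonDate -lt (Get-Date).AddDays(-90) } |
    Select-Object Name, samaccountname, LastLogonDate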

What can we learn as a person today?

Just as “AD User Audit with PowerShell” ensures the health of an IT infrastructure, journaling serves as a personal audit. It’s a practice that offers self-reflection, tracking progress, and identifying areas for improvement in our personal lives.

Self-Reflection

Journaling is a gateway to the soul, a mirror reflecting our deepest thoughts and feelings. In the quiet moments with our journal, we engage in an intimate conversation with ourselves. This practice allows us to confront our fears, celebrate our successes, and ponder our aspirations. It’s a safe space where we can express emotions without judgment, helping us understand and accept who we are.

Imagine a day filled with stress and uncertainty, where words left unsaid create a storm inside you. In your journal, these unspoken thoughts find a voice. You write about the meeting that didn’t go as planned, the conversation with a friend that left you unsettled. As the words flow, there’s a sense of unburdening, a weight lifting off your shoulders. It’s in this self-reflection that clarity emerges, often bringing peace and a newfound understanding of your emotional landscape.

Tracking Progress

Like a lighthouse guiding ships through a stormy sea, journaling illuminates our path through life’s complexities. It helps us track where we’ve been and where we’re heading, offering insights into our personal evolution. By regularly documenting our experiences, thoughts, and feelings, we create a tangible record of our journey. This practice can reveal patterns in our behavior and responses, helping us recognize both our strengths and areas for growth.

Consider a journal entry from a year ago, describing feelings of apprehension about starting a new venture. Fast forward to today, and your journal tells a different story—one of growth, learning, and newfound confidence. The contrast between then and now is stark, yet it’s a powerful testament to your resilience and adaptability. Reading your past entries, you feel a surge of pride in how far you’ve come, reinforcing your belief in your ability to face future challenges.

Identifying Areas for Improvement

Journaling not only captures our current state but also acts as a compass, pointing out the areas in our life that need attention. It can highlight recurring issues, whether in relationships, work, or personal habits, prompting us to seek solutions or make necessary changes. This process of self-audit through journaling encourages us to be honest with ourselves, to confront uncomfortable truths, and to take actionable steps towards betterment.

Imagine writing about the same problem repeatedly—a strained relationship with a colleague or a persistent feeling of dissatisfaction. Seeing these recurring themes in your journal can be an eye-opener, signaling a need for change. It might inspire you to initiate a difficult conversation, seek external advice, or explore new opportunities. The journal becomes a catalyst for transformation, nudging you towards decisions and actions that align with your true desires and values.

Final Thoughts

In each stroke of the pen, journaling offers a chance for introspection, growth, and healing. It’s our personal audit, tracking our emotional health and guiding us towards a more mindful and fulfilling life. Just as “AD User Audit with PowerShell” ensures the health of our digital environments, journaling safeguards the wellbeing of our inner world.
