Hi there! Have you ever scratched your head and wondered whether you installed software the right way? You're not alone. This question gives plenty of system administrators a headache, especially when managing programs like AutoCAD 2022 across a variety of environments. That is where Microsoft Intune really shines, because it lets you bring your own detection logic. A custom Intune detection script is key.
These scripts are lifesavers. They check every device on your network to make sure an app is not only present, but also the exact version you expect. Today, we're going to look in detail at a PowerShell script that detects AutoCAD 2022. Whether you're an Intune veteran or brand new to it, this guide should make your work life a little easier. Let's begin on our Intune detection script!
How do I make an Intune Detection Script?
First, what is a custom Intune detection script, really? It's just a script for Microsoft Intune, your management tool, that automatically verifies all of your devices have the same version of a piece of software installed. What makes this cool? It automates one of the most boring jobs in IT management. Imagine confirming software compliance and correct installations without having to check each machine by hand. Sign me up!
Custom scripts like the one we're covering today are written in PowerShell, a powerful scripting language that can do a lot with just a few lines of code. These scripts can reach into the Windows Registry, find installed programs, and check their versions. It's not just about saving time; it's about keeping your software deployments healthy and stable. It also cuts down on those dreaded support calls we all hate.
The Breakdown
Getting into the nitty-gritty of our PowerShell script, let's go line by line through our Intune detection script and break it down, so you understand exactly what each part does. Let's get our geek on!
Lines 1-2: Define the Product
These two lines define the product you want to search for and the version you want to check. The product name accepts wildcards, but I don't suggest them, as they can cause more conflicts than they're worth.
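As a sketch, those first two lines might look like this (the version number is a hypothetical build; use the one you actually deploy):

$ProductName = 'AutoCAD 2022'   # Display name to search for; wildcards work but are risky
$Version = '24.1.51.0'          # Hypothetical build number for illustration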
Line 3 is where we point at the registry keys that hold the uninstall strings and product information. These registry keys hold the same information Win32_Product surfaces, so reading them directly is much faster than querying Win32_Product.
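Assuming you want both the native and 32-bit-on-64-bit hives checked, line 3 could look something like this sketch:

# Both uninstall hives, where installed applications register themselves
$RegPath = @(
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall',
    'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall'
)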
Here, we're grabbing a list of all the items in the paths defined earlier. It's akin to gathering all the potential treasure chests; we'll dig through them later for the coins we need.
$apps = Get-ChildItem -Path $RegPath
Lines 5-7: Filter and Test for the Product
In these lines, we loop through each app and check whether it matches our product name. If it does, we take a closer look at its properties. This is where we separate the gold coins from the silver: each matching product goes into our test variable, our chest.
Assuming you chose a name that only shows up once, we now check whether the version matches. If it does, we say "yep, it's installed" and exit with a code of zero, the big 0. If it doesn't, we exit with a code of 1. This matters because Intune looks for a string of output plus an exit code of 0 to mark success.
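Under those assumptions, lines 5-7 and the final test could look like this minimal sketch:

# Lines 5-7 (sketch): find the matching product among the registry items
$test = $null
foreach ($app in $apps) {
    $props = Get-ItemProperty -Path $app.PSPath
    if ($props.DisplayName -like $ProductName) { $test = $props }
}

# Final test: the output string plus exit 0 is what Intune counts as success
if ($test.DisplayVersion -eq $Version) {
    Write-Output 'Installed'
    exit 0
}
exit 1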
Deploying a custom detection script in Intune takes more than copying and pasting code; you need to make sure the script runs smoothly on every targeted device. Step-by-step instructions:
Prepare the script: test it locally first. You shouldn't distribute something without running it on your own machines.
Put the script in Intune:
Enter the Microsoft Endpoint Manager admin center.
Select Windows 10 under Devices > Scripts > Add.
Upload the PowerShell script and configure its settings, including whether it runs in the system or user context, depending on the access level it needs.
Assign script:
After uploading your script, assign it to device groups. You can target groups by organizational unit or other deployment parameters.
Monitor script deployment:
After deployment, monitor script execution on the script profile's Device Status and User Status tabs. This shows whether the script is executing properly or failing on any devices.
Update as needed:
Feedback from monitoring may call for changes to the script or the deployment parameters. Regular updates may also be needed to stay compatible with new system updates or changes in your IT environment.
Effective script deployment guarantees that every device on the network meets your software standards, like making sure all the parts of a machine are well-oiled and working together.
Common Issues and Troubleshooting Tips for an Intune Detection Script
Even with the best preparation, things might not always go as planned. Here are some common issues you might face with custom Intune scripts and how to troubleshoot them:
Script Fails to Execute:
Check Execution Policy: Ensure that the script’s execution policy allows it to run. This policy can sometimes block scripts if not set to an appropriate level.
Review Script Permissions: Make sure the script has the necessary permissions to access the registry paths or any other resources it uses.
Incorrect Script Output:
Verify Script Logic: Double-check your script’s logic. Look for typos in variable names or incorrect operators in conditions.
Test Locally: Always run the script locally on a test machine before deploying it to avoid simple errors.
Issues with Script Deployment:
Assignment Errors: Make sure the script is assigned to the correct device groups. Incorrect assignments can lead to the script not being run where it’s needed.
Check Intune Logs: Use the logs provided by Intune to identify what’s going wrong when the script runs.
Troubleshooting is an integral part of managing scripts in a large environment. It’s a little like detective work, where you need to keep a keen eye on clues and sometimes think outside the box.
What can we learn as a person today?
Even though we don’t always mean it that way, we frequently execute “scripts” in our day-to-day lives, much like a PowerShell script checks for certain conditions before proclaiming success or failure. These are the things we do on a regular basis without thinking, like automated checks on a computer system; they help us evaluate and respond to the many opportunities and threats that life presents.
When we look for patterns in our own lives, we can see what's working and what isn't. By exercising first thing in the morning, for instance, you may find that you get more done that day; that's a positive pattern, like a script verifying everything is going according to plan. In contrast, if you find yourself feeling low after a social media session, it's a sign that something needs to change, like a failing script.
It is essential to listen to environmental feedback in order to make modifications. Our emotional and physiological responses, the opinions of others around us, and the outcomes we attain can all serve as sources of this type of feedback. Like adjusting a screenplay that isn’t working as planned, when our life’s routines bring about less ideal consequences, it’s a warning to halt and re-calibrate. Perhaps it necessitates reevaluating our current habits and deciding how much time is best spent on specific pursuits.
The idea is to embrace learning and refining as a process, just like scripts that are updated over time. There is no instruction manual for life, and sometimes the only way to learn is by making mistakes. Being self-aware and willing to make adjustments for the better is more important than striving for perfection.
Over the years of Intune deployments, I have searched for a way to let my end users know that an application is being installed on or uninstalled from their computer. I have tried everything from notification bubbles to blanking the screen, and all of those methods are poor at best. I found a few paid products that companies simply wouldn't pay for due to the insanity of the pricing; one vendor wanted 150 USD per deployment. Multiply that by 1,500 devices and it adds up quickly. It wasn't until I found the PowerShell App Deployment Toolkit that I found what I was looking for.
What is the PowerShell App Deployment Toolkit?
This toolkit is immensely powerful and amazingly simple to set up. You can download the toolkit here. It provides a framework for installing and uninstalling applications with PowerShell through a signed application, which lets us ship complex, confusing deployments as a single package. A good example is AutoCAD. Recently, I was tasked with standardizing AutoCAD in a single department: some members used AutoCAD 2016, some used 2024. This was a problem, as the 2024 files did not work with AutoCAD 2016. I needed to uninstall the previous versions of AutoCAD before I installed the current one. Since all files are backed up, I didn't have to worry about anyone losing work. The toolkit was perfect for this.
Key things I like about the toolkit
Simple packaging
Many application toolkits come with complex packaging: an application that wraps itself around another application, over and over, until none of it is transparent. With the PowerShell App Deployment Toolkit, the only thing you need to interact with is the Deploy-Application.ps1 file, and that's only if you are doing more than a plain MSI install. If you are only deploying an MSI, all you need to do is drop the file in.
As you can see in the screenshot, this is the package. When you download the zip file, you are greeted with this amazing structure. Deploy-Application.ps1 is where our code goes, and the Files folder is where the installer files go. Following our AutoCAD example, the installer and its updates would all be placed inside the Files folder.
Deploy-Application.ps1
This file has an amazing setup. It opens with a wall of documentation inside the file itself, explaining each step along the way. It is broken up into installation, uninstallation, and repair, and each section has pre-, during, and post-process phases. This is great if you need to kill some services, send a message, or more. It's also helpful because it gives you a structure to work within.
The Commands
This packaging includes many useful commands. As I stated in the intro, it's full of ways to communicate with the end user about what you are doing. During an application install, you can show which applications need to be closed for the install to work by using the Show-InstallationWelcome command.
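A minimal example, assuming 'acad' is the process you need closed (check the toolkit's built-in help for the full parameter list):

Show-InstallationWelcome -CloseApps 'acad' -CloseAppsCountdown 60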
This example shows the toolkit asking to close the listed applications and giving the user a 60-second window to do so. That isn't the only thing this command can do.
Other commands, like Execute-Process, launch processes you need from the file directory and more, all while logging what's going on. You can find a full help system for all the toolkit's unique commands: navigating to toolkit > AppDeployToolkit > AppDeployToolkitHelp.ps1 brings up a GUI that lets you read all about them.
Using the Toolkit with Intune
If you want the toolkit to work with the end user's profile, you will need to grab a unique little tool from MDT: ServiceUI.exe. You can download MDT here. Once you have MDT installed, pull ServiceUI.exe out of the install: navigate to C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x64 and copy the ServiceUI.exe file. Place it in the root of your PowerShell App Deployment Toolkit folder structure.
As you can see, ServiceUI.exe is in the root folder. Now we need to create the package: a Win32 app package, which I covered here. The concept is the same.
The source folder is the folder containing your toolkit.
The setup file is Deploy-Application.exe.
The output folder is wherever you want the Intune app to be dropped.
And we don't need to catalog the folder.
Once you have your application built, it's time to see how it works inside Intune. We start by building the application package: as stated in the previous blog, we begin by uploading. The big difference here is our install and uninstall commands.
Understanding the commands
Our install command uses both ServiceUI.exe and Deploy-Application.exe.
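In practice, the pair looks something like the sketch below; the -process flag tells ServiceUI which user-session process to attach to, and explorer.exe is the usual choice:

# Install command
ServiceUI.exe -process:explorer.exe Deploy-Application.exe

# Uninstall command
ServiceUI.exe -process:explorer.exe Deploy-Application.exe -DeploymentType "Uninstall"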
By default, Deploy-Application.exe runs interactively. It takes two flags, and here is what they do.
DeploymentType (super straightforward):
Install: Installs the application.
Uninstall: Uninstalls the application.
Repair: Repairs the application.
DeployMode:
Interactive: Shows all of the prompts needed.
NonInteractive: Only shows the required prompts.
Silent: Shows no prompts.
We can translate the command above using these flags. By default, Deploy-Application.exe runs an interactive install, so we know the end user will see the prompts. The uninstall command uninstalls, also interactively. ServiceUI.exe is what lets the package run as SYSTEM while displaying in the logged-in user's session; it needs no flags of its own. Its biggest limitation is that the application will not install until someone logs in.
Overall, the PowerShell App Deployment Toolkit changes the ball game for deployments. I encourage anyone and everyone to dig deeper into it.
What can we learn as a person today?
I live in the southern United States. From time to time I hear people battling over belief systems, and over my lifetime I have come to an understanding of how these systems work. I liken objective truth to fish in a sea. Our belief systems are the nets we use to capture those fish. Some nets are better than others. The water of the sea is the useless, distracting, or mis-informing stuff; it only makes it harder to bring those pieces of objective truth into ourselves. A good net captures a lot of fish and lets the water out at the same time. A bad net, like a tarp, captures some fish but becomes unmanageable because of the water. It's the same with our beliefs: we are only strong enough to lift so much at different points in our lives.
Premade Nets
I see organized religions as premade nets. Think of one like a toolkit: a ready-made format that is easy to use and lets you get things done. Does the toolkit work for everyone? No. Just like this PowerShell toolkit would be useless in a world without PowerShell (on ChromeOS, say, it isn't useful), some beliefs are useful where they are and not useful elsewhere. These toolkits and nets serve some people but not others. If you don't know PowerShell, this toolkit won't help you; if you are shame-sensitive, some religions are not for you.
Everyone has their own toolset or net. No single toolset is inherently bad; it's how we use them and where we use them. If you take a net to a small pond, get ready to waste your time and damage your net. If you throw your net aggressively into an aggressive sea, get ready to lose that net.
Homemade Nets
Once someone understands how nets are made and how to repair them, it's best for them to start building their own nets using the techniques learned from their previous ones. Having a net or toolset of your own gives you full knowledge of it and the ability to repair it quickly. This belief system would be uniquely yours, different from everyone else's, so when it breaks you can grow it, replace parts, and more without any problems. It's yours and no one else's.
This past month I was given a task to uninstall a few applications with Intune. However, the apps' uninstall features did not work according to plan, while a bunch of them worked with the CIM uninstall method, which I thought was funny. After writing the same code over and over again, I decided to write a general uninstaller for Intune. It also requires a custom detection script.
Here we have a general uninstaller for Intune. The script lets us feed in product names as-is, or with wildcards added. We start by grabbing the product names from the user; this happens during the Intune setup. When it deploys, the first thing the script does is grab all the applications registered in Win32_Product. If the application didn't register itself there, this script will be pointless for you.
Once we have the products, we go through each product name. We first check whether the product is on the system. If it isn't, we output success and exit with a unique exit code, which we will use later. If the product is on the machine, we grab its install location, then pipe the product into the uninstall method via the CIM method command. Finally, we check whether an install location exists on the installed object: some applications give us this information, some don't, and some give us multiple locations.
To work around this, we check whether the install location property is null. If it isn't, we start a loop, since some applications report more than one location, and test whether each file path still exists. Sometimes the application's uninstaller removes the folder and sometimes it doesn't, which is why we test. If the location is still there, we remove it with a good old -Force and -Recurse. Finally, we exit with the unique exit code.
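Here is a minimal sketch following that exact flow. The product name is a placeholder, and 1212 is the unique success code we wire up in Intune later:

# General-Uninstall.ps1 - sketch; product names here are hypothetical
param (
    [string[]]$ProductNames = @('Example Product*')
)

# Grab every application registered in Win32_Product
$Products = Get-CimInstance -ClassName Win32_Product

foreach ($Name in $ProductNames) {
    $Installed = $Products | Where-Object { $_.Name -like $Name }

    if (-not $Installed) {
        # Not on this system: report success with our unique exit code
        Write-Output "Success"
        exit 1212
    }

    foreach ($App in $Installed) {
        # Remember the install location before the object disappears
        $Location = $App.InstallLocation

        # Pipe the product into the CIM uninstall method
        $App | Invoke-CimMethod -MethodName Uninstall | Out-Null

        # Some uninstallers leave the folder behind; clean up if so
        if ($null -ne $Location) {
            foreach ($Path in $Location) {
                if (Test-Path -Path $Path) {
                    Remove-Item -Path $Path -Force -Recurse
                }
            }
        }
    }
}

Write-Output "Success"
exit 1212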
With any custom script install or uninstall, a custom detection script is necessary. The first step is to grab the product names; just like before, it's a list of strings, so you can target more than one. Then we grab all the products with our CIM instance of Win32_Product and loop through each product name to see whether the product still exists. If it does, we exit with a 1, which basically says "I failed!" Intune needs a string of output plus an exit code of 0 to count success; an exit of 1 without the string ends the script, and without that string, Intune assumes failure. If we get through every name without triggering that exit, we are safe to exit with a 0 and the beautiful word "Success".
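A sketch of that detection logic is below. Note that Intune runs detection scripts without arguments, so in practice you would hard-code your names; the param block with a placeholder default just keeps the sketch flexible:

# Detection script sketch - same product names as the uninstall package
param (
    [string[]]$ProductNames = @('Example Product*')
)

$Products = Get-CimInstance -ClassName Win32_Product

foreach ($Name in $ProductNames) {
    if ($Products | Where-Object { $_.Name -like $Name }) {
        # Still installed: exit 1 with no output string, so Intune records failure
        exit 1
    }
}

# Nothing matched: the string plus exit code 0 tells Intune we succeeded
Write-Output "Success"
exit 0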
Building it out in Intune.
Building the IntuneWin File
The first thing you will need to do is save your script into a folder, then download the Win32 Content Prep Tool (IntuneWinAppUtil.exe) to package up the PowerShell script. Unpack the tool and start up your command prompt. The application will guide you through the process of producing an .intunewin package.
General Uninstaller for Intune
Please specify the source folder: This is the folder that has your script inside it. If you wanted to create something more complex, this part would change your deployment approach. Future blog post coming.
Please specify the setup file: This is the PowerShell script's name: General-Uninstall.ps1.
Please specify the output folder: This is the folder where the .intunewin file will be dropped.
Do you want to specify catalog folder (Y/N)? This one is for more advanced packages; we can say no for this setup.
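You can also answer those prompts up front with flags; a typical run might look like this (the folder paths are hypothetical):

IntuneWinAppUtil.exe -c C:\Packages\General-Uninstall -s General-Uninstall.ps1 -o C:\Packages\Output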
Setting Up Intune for Your Uninstaller
Now that we have the .intunewin file, it's time to set up the Intune deployment. This is where you will add things like the product names for our general uninstaller for Intune.
Navigate to Endpoint Manager
Click Apps
Click Windows
Click Add
Click Select App Package File.
Add the General-Uninstall.intunewin file.
Click OK.
Change the Name
Click edit Description and add a detailed description for other admins. Make sure to include instructions on what to do with the detection script.
The publisher can be your company, or in my case, self.
The category is going to be Computer Management, as this is a general uninstaller.
Feel free to add any additional information. Link this blog post as the information URL if you wish.
The install command runs the script itself; the uninstall command can be as simple as a file removal. See the sketch after this list.
Device Restart Behavior: Determine behavior based on return codes
Return Codes: Remember that unique exit code from the script? This is where you place it. I have 1212 mapped as a success.
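For a script-based package like this one, the install command is typically just a PowerShell call to the script; a hedged example using our file name and the usual flags for unsigned scripts:

powershell.exe -ExecutionPolicy Bypass -File .\General-Uninstall.ps1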
The next screen is the requirements screen. We can do a lot with this screen, but we don't need to here.
Operating System Architecture: 32-bit and 64-bit.
Minimum Operating System: Windows 10 1607.
Now we need to set up the custom detection.
Select "Use a custom detection script".
Validate your product names to be uninstalled.
Upload and click next.
Accept the defaults for Dependencies and Supersedences.
The final screen is where you assign the script. There are three sections: Required, Available for enrolled devices, and Uninstall. This is where you select who is going to get what.
Testing, Monitoring, and Deployment
The assignment area is where you assign the script to whomever you want. This is very important: test before you deploy broadly. Have a test group and apply the app to it first.
Deploy the uninstall app to the test device group.
Monitor the Intune deployment status for the app to ensure successful deployment to devices/users.
Test whether the application is still on a target computer. This can be done with Control Panel, PowerShell, and other options.
Refine and correct any issues, then restart the testing.
Deploy
What can we learn as a person today?
When was the last time you threw a rock? How about a rock into a lake? The last time you did, did you notice the ripples? Just as a deployment like this can cause ripples in your company, removing things from your life can cause just as many ripples in yourself. Make sure you are ready to let go of whatever you are holding onto. It's always a good idea to test it out, or to have a support group to help you. Those ripples can do some damage, so be ready before you uninstall parts of your life.
A few weeks ago, we built WordPress in Docker. Today I want to go deeper into the world of Docker. We will still be working with a single WordPress instance, but we will be able to expand this setup over time. Unlike last time, we will be self-containerizing everything and adding plugins, along with the PHP LDAP extension, which doesn't come with the wordpress:latest image natively. It's time to build WordPress in Docker with LDAP.
Docker Files
As we all know, Docker uses compose.yml files for its base configuration. This file processes the requested image based on the instructions in the compose. Last time, we saw that we could mount wp-content to our local file system and edit it accordingly; the compose handles that. This time we are going about it a little differently. The compose file handles basic items like mounts, volumes, networks, and more, but it can't really edit a Docker image or add to it. What it can do is call upon a build command.
The build always lives within the service you want it to apply to. The context is the path of the build, which is useful if your build files live somewhere else, like a share. The dockerfile is the name of the build file; I kept it simple and went with "dockerfile". This means there are now two files: the docker-compose.yml and the dockerfile.
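As a sketch, the build section of a service might look like this (the service name is borrowed from the compose shown later in this post):

services:
  sitename-wp:
    build:
      context: .
      dockerfile: dockerfile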
What is a Dockerfile?
The dockerfile takes an image and builds on it, but it has limitations. Every instruction adds a layer, which adds to the overall size of the image. Non-persistence is the next problem: by its ephemeral nature, anything not baked into a layer disappears after the build. Execution is single-threaded and very linear, top to bottom, so it can't handle multiple things at once. If/then and other control structures are not part of the dockerfile language, which makes it a poor programming language. There are also limits to versioning.
The dockerfile cannot configure networking or ports, and there is no user management inside the dockerfile process. Complexity is a big problem with these files: the more complex they get, the harder they are to maintain. Never handle passwords inside a dockerfile. The dockerfile also can't consume the runtime environment variables your compose file provides. The thing that hit me the hardest: limited apt-get/yum commands. Build context matters too, as a large context can slow the build down. Finally, dockerfiles may not build identically on all hosts.
With those items out of the way, dockerfiles can still do a lot of good, such as layering additional items onto a Docker image. Commands in the file run as root inside the image during the build, which means you can install programs, move things around, and more. It's time to look at our dockerfile for WordPress in Docker with LDAP.
The Dockerfile
# Use the official WordPress image as a parent image
FROM wordpress:latest
# Update package list and install dependencies
RUN apt-get update && \
    apt-get install -y \
        git \
        nano \
        wget \
        libldap2-dev
# Configure and install PHP extensions
RUN docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ && \
docker-php-ext-install ldap
# Clean up
RUN rm -rf /var/lib/apt/lists/*
# Clone the authLdap plugin from GitHub
RUN git clone https://github.com/heiglandreas/authLdap.git /var/www/html/wp-content/plugins/authLdap
# Add custom PHP configuration
RUN echo 'file_uploads = On\n\
memory_limit = 8000M\n\
upload_max_filesize = 8000M\n\
post_max_size = 9000M\n\
max_execution_time = 600' > /usr/local/etc/php/conf.d/uploads.ini
The Breakdown
Right off the bat, our FROM pulls down the wordpress:latest image. This is our base layer. Then we want to RUN our first command. RUN instructions like to group related commands together, and remember: every command runs as the container's root. The first RUN contains two commands, apt-get update and apt-get install. We are installing git (so we can grab a plugin), nano (so we can edit files), wget (for future use), and our PHP LDAP dependency.
Please notice the && \. The \ tells the shell to treat the next line as part of the same command, and && means "and": it lets you run multiple commands in one statement. Since each RUN is a single statement, this is very important. The libldap2-dev package is the LDAP dependency for PHP. Our next RUN configures the Docker PHP extension.
The Run Commands
RUN docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ && \
docker-php-ext-install ldap
The docker-php-ext-* scripts are built into the WordPress image. We tell the configure script where our new libraries are located for PHP, then tell PHP to install the ldap extension. After it's installed, we do some cleanup with the next RUN command.
rm -rf /var/lib/apt/lists/*
At this point, we have WordPress in Docker with the LDAP PHP modules. Now I want a cheap, easy-to-use plugin for LDAP; I like the authLdap plugin. We use the git command we installed above to clone the plugin's repo and drop it into the WordPress plugins folder. This is our next RUN command.
In our previous blog, we used a printf command to make an uploads.ini file. We don't need that here; we can do it in the image. We trigger our final RUN command, this time with echo. Echo just prints what you give it, so we echo all the PHP settings into our uploads.ini within the image.
First things first: notice everywhere you see the name "sitename". To use this compose correctly, you must replace that, which also lets you build multiple sites, each with its own containers, networks, and more. As stated before, the first thing we come across is the build area. This is where we tell the system where our dockerfile lives: context is the path to the files in question, and dockerfile is the file above.
Next are the ports. We are working with port 8881:80. The first number is the port your host listens on; the second is the port the container understands. Our SSL port is 8882, which maps to the standard 443 on the container's side.
ports:
  - "8881:80"
  - "8882:443"
Next are the environment variables. If you notice, some items have ${codename} instead of data. These are variables that pull their data externally, which keeps the secrets out of the compose file. The volume is the next part of this code: instead of giving a physical location, we are giving it a named volume, which we declare later. Next, we state that the WordPress service depends on the MySQL service. Finally, we tie the container to a network. The process is the same on the database side.
Finally, we declare our network in the networks section. It has its own unique name; you'll see the sitename within the network name. We set this network to bridge, allowing access from the outside world. We declare our volumes here as well.
The hidden environment file
The next file is the environment file. For every ${codename} inside the compose, we need an environment variable to match it. A special note about the WordPress salts: unique symbols, such as $ or =, can break the variable substitution, so it is wise to use numbers and letters only. Here is an example:
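This is a sketch of what the .env file might hold; every variable name and value here is a placeholder, since yours depend on the ${codename} entries in your compose:

# .env - placeholder values; swap in your own
sitename_db_root_password=LongRandomString1234
sitename_db_password=AnotherLongRandomString5678
sitename_auth_key=abcDEF123ghiJKL456mnoPQR789
sitename_secure_auth_key=zyxWVU987tsrQPO654nmlKJI321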
As always, grab your salts from an official source if you can make it work; here is the WordPress official source site. You can also use PowerShell to generate a single password; take a look here. Of course, replace everything in this file with your own passwords. If you have the scripting knowledge, you can auto-generate much of this.
Bring Docker to Life
Now we have all of our files created. It’s finally time to bring our creation to life. Run the following command:
docker compose up -d
If you watch, additional information appears as the dockerfile runs, and you can follow along. If there are errors, you will see them here; often they are syntax issues. Docker is really good at showing you what is wrong, so read the errors and try finding the answer online.
What can we learn as a person today?
Men are born soft and supple; dead, they are stiff and hard. Plants are born tender and pliant; dead, they are brittle and dry. Thus whoever is stiff and inflexible is a disciple of death. Whoever is soft and yielding is a disciple of life. The hard and stiff will be broken. The soft and supple will prevail. — Lao Tzu
In seeking assistance from forums like the sysadmin subreddit or Discord channels, I often encounter rigid advice, with people insisting on a singular approach. This rigidity echoes Lao Tzu’s words: “Men are born soft and supple; dead, they are stiff and hard… The hard and stiff will be broken. The soft and supple will prevail.” In professional settings, flexibility and adaptability are crucial. Entering a new company with an open mindset, ready to consider various methods, enables us to navigate around potential obstacles effectively. Conversely, inflexibility in our career, adhering strictly to one method, risks stagnation and failure. Embracing adaptability is not just about avoiding pitfalls; it’s about thriving amidst change. Lao Tzu’s wisdom reminds us that being pliant and receptive in our careers, much like the living beings he describes, leads to resilience and success.
It's time to build on our Docker knowledge. WordPress is a powerful web platform that a large part of the internet is built on; this site is built on WordPress. Whenever I am working on a site for a friend, I build myself a WordPress instance and create their site in my test environment. When I get it the way I want, I move it and destroy the original. The best way to destroy the original is to wipe it from existence. This is where Docker and WordPress are friends.
Docker and WordPress
This method allows you to host multiple WordPress sites with your Docker image. We want this because it lets us test actions between sites and more; it's one of those amazing little tools that saves so much time. Before that, we need some basic setup, starting with networking. We want to build a network in Docker for our WordPress sites. We do this outside of the compose because writing if/then logic in a compose file is a mess, and a shared external network also lets you run multiple networks and so on. We do this with the docker network create command. Of course, run it as the docker user or with sudo.
docker network create dockerwp
Docker Compose File
Now that we have our Docker network built, we need to build our compose file. Inside the folder where you keep all of your Docker projects, I suggest making a new folder called wordpress and moving into it. Then create a docker-compose file using the nano command.
mkdir wordpress
cd wordpress
nano docker-compose.yml
Next, copy and paste the docker compose below into it.
version: "3.8"
services:
  sitename-db:
    image: mysql:latest
    volumes:
      - ./sitename_db/data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: AmazingPasswordOfAwesomeness
      MYSQL_DATABASE: sitename_wp_db
      MYSQL_USER: sitename_wp_user
      MYSQL_PASSWORD: AnotherAmazingPassword
    networks:
      - dockerwp

  sitename-wp:
    image: wordpress:latest
    depends_on:
      - sitename-db
    volumes:
      - ./sitename_wp/wp-content:/var/www/html/wp-content
      - ./sitename_wp/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      # Add other files or folders that you want to override here e.g. stylesheets
    ports:
      - "8880:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: sitename-db:3306
      WORDPRESS_DB_NAME: sitename_wp_db
      WORDPRESS_DB_USER: sitename_wp_user
      WORDPRESS_DB_PASSWORD: AnotherAmazingPassword
    networks:
      - dockerwp

networks:
  dockerwp:
    name: dockerwp
    external: true
From there, you can run "docker compose up -d" to create the WordPress site with the default settings. I don't suggest it, but you can. How do you use this compose? First, replace wherever you see "sitename" with the site name you want. If you want more than one site, copy the db and wordpress sections over and over, each time replacing the site name with something different. Make sure to change those amazing passwords.
How does this compose work?
This compose works by creating an individual world for each site. The sitename token lets you rename everything the way you want: if you want therandomadmin_com-db, that can happen; if you want therandomadmin_org-db, that can happen too. Each one has its own name, which is what splits them apart. The network they share allows them to talk with each other and back out again. The uploads.ini file gives each site its own custom upload limits; I will go over that in just a minute. Just imagine them as little cups, each holding two unique coins. As long as the names line up, they can talk to each other. If you wanted to, you could take it a step further and make a new network for each compose, but that gets messy quickly when you try to herd all of those networks into one place.
Next steps
The volumes section of each service creates folders. Each folder is important because it holds the content for that container. Notice the WordPress volumes include ./sitename_wp/uploads.ini. This is very important, as it controls how much data can be uploaded, and each site has its own. You can use the command below to create a simple file for each container; to activate the file, restart the container.
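A sketch using the printf approach from the previous post; the 64M values match the note below, and the path matches the volume defined above (adjust both to taste):

printf "file_uploads = On\nmemory_limit = 64M\nupload_max_filesize = 64M\npost_max_size = 64M\nmax_execution_time = 600" > ./sitename_wp/uploads.ini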
This command creates the ini file that tells the system how much you can upload. I have it set to 64 megabytes, but you can set it to whatever you want. By default, the limit is 2 MB, which is extremely small for today's images.
Finally, you can use the nginx reverse proxy system to assign an SSL certificate to each site as you see fit. I personally don't, since I don't expose these sites to the outside world, but you can; the instructions were covered in the previous blog about Ladder. Believe it or not, that's it. The next step is to go to each site's IP or hostname and set up WordPress like normal.
What can we learn as a person today?
Recently I went to a tech networking event where I met multiple new and unique people, and I enjoyed every minute of talking tech with each of them. While talking, I learned new ways to use my dormant skills; everything from body language to mental health knowledge, even down to my cooking, was improved. We talked about IT, AI, and in some cases the color of the sky. It was a pleasure. Later, I was the one helping others on a local Discord server, talking about the day and the things we needed.
What spoke to me while working on this blog post was that each WordPress install has its own container and its own world, but the network is the same. This allows the WordPress installs to talk with each other and share items easily. We are the same way as humans. We are all unique in our own ways: I can be someone who enjoys reading a good white paper about mind bind while someone else enjoys a good book on how Pepsi-Cola is made. We are all different. What we have in common is our networks.
Without our networks, we can't go far. Imagine a WordPress install hosted on its own network, but that network can't leave your lab. Would it be useful to the outside world? How about this site? What if I locked it down so only one other IP address could read it? This blog wouldn't be helpful to you. Our networking is the same: if we lock ourselves down to only one group of people, we can't grow and they can't grow. This is often how cults are made; they lock themselves down to only themselves and whoever they can recruit.
Think about it
As you go through your week, think about your networks. If you go to church, that's a network. If you go to school, that's a network. How about your Discord friends? That's a network as well. Each place has its own network, even if that place is temporary, like a store. What can you bring to those networks, and what can you learn from them?
The other day I was searching for a piece of code for work. One of the links I clicked was geo-locked to the EU only, which threw me off; I didn't have a VPN on the computer. So what do you do? You use a web proxy. Last week we talked about a reverse proxy; a web proxy is a website you can use to appear as though you are browsing from that site's host. Most of the bigger sites will block you from using a web proxy, but simple sites have no idea. Everywall built a simple web proxy that we can run in Docker. This is where we get to use Ladder with Docker.
What is Ladder
Ladder is a web proxy. When you install it inside your homelab, or wherever you can run Docker, you can enter a URL and Ladder will fetch that page from its own machine. In the example above, my Ladder install had to be on a machine in the EU for me to access the site; fun part is, I had a box in the EU to work with. A web proxy works by sitting in the middle: when you enter the URL you want, the proxy acts like the browser and sends the request from itself, then brings the response back and displays it to you. Ladder appends the target link to the end of its own URL, so you can edit the URL if need be. If you go to "Therandomadmin.com" through your Ladder, the site thinks you are coming from the Ladder instead of your browser. You could be at work, using your Ladder to view the rest of the world from home. Yes, this can get around filters.
How do you install Ladder?
Here is how I run Ladder with Docker. First things first, always check out the official documentation; you can do that here. We will be using our reverse proxy from the previous blog post, here. Docker is our go-to. First, log into your server using ssh. Once in, navigate to wherever you keep your Docker folders. Next, use the mkdir command to make a folder called ladder, then cd into it.
mkdir ladder
cd ladder
Inside the ladder folder, we want to create a compose file. It's time to build it with the nano command; we want a docker-compose.yml file.
nano docker-compose.yml
You will be brought into the editor, where you can write the compose file. Simply copy the information below and paste it into the text file.
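This sketch matches the breakdown that follows; check Everywall's official documentation for the canonical version, as the environment variable names here come from their published example:

version: "3"
services:
  ladder:
    image: ghcr.io/everywall/ladder:latest
    container_name: ladder
    restart: unless-stopped
    environment:
      - PORT=8080
      - RULESET=/app/ruleset.yaml
    ports:
      - "8080:8080"
    volumes:
      - ./ruleset.yaml:/app/ruleset.yaml
      - ./form.html:/app/form.html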
To save, all you have to do is press ctrl and x and follow the prompts.
Breakdown of the YAML
Like before, we are starting with version 3 of the compose format. Our service is called ladder, and the image comes from ghcr.io: Everywall is the publisher, ladder is the image name, and we grab the latest tag. The container's name will be ladder. We set restart so the container keeps running unless we stop it, which allows it to survive a reboot. Next come the environment variables: we want port 8080 and our ruleset wired up accordingly (later we can build a unique ruleset). Then we select our ports. The host port is 8080, which we can change to whatever we want; the image port is 8080. Finally, we mount our volumes: /app/ruleset.yaml and /app/form.html. Ladder has additional options, which you can find in the official documentation. Of course, you will need to start the image; do so using the docker compose command with the -d flag.
docker-compose up -d
# If using docker-compose-plugin
docker compose up -d
Now navigate to http://<ip address>:8080 and confirm the site is up and running.
Pointing your Reverse Proxy to your Ladder with Docker
Now we want to point the reverse proxy we made in the last post at our Ladder. Let's follow these steps:
Navigate to your reverse proxy and log in
Click on the Dashboard button if you are not already there.
Click “Proxy Hosts”
Click “Add Proxy Host”
Enter your name for the ladder. Remember to have the DNS already setup for this.
Enter the IP address you wish to forward to.
Enter your port, in this case it will be 8080
Select “Websocket support”
If you want a custom SSL certificate for this site, complete the following:
Click SSL
Under SSL Certificate, select "Request a new SSL Certificate".
Enter your email address and agree to the Let's Encrypt terms of service.
Click Save
If your DNS is pointing correctly and your Ladder is working, your system will be issued an SSL certificate. Now your Ladder is ready to go. I hope you enjoy it.
What can we learn as a person today?
As you can see, this post builds on the last one. Most of our lives are built on something from our past. I know PowerShell really well; now imagine if I suddenly couldn't read. All those skills would be gone. Our minds are built on stages of knowledge and skill sets. Inside the brain is a network more complex than the world's road systems. If you are studying something that has no obvious use right this minute, it may prove useful a few years down the road, because knowledge builds upon itself. I didn't know why I was studying virtual hosts for Red Hat servers back in the day; now you are reading my blog. Sometimes knowledge is wasted space, or even damaging. It stays there, but like awkward emails, it goes to trash at some point. As a person, you can choose to build on your skills and grow any way you choose.