Find LOG4J with Intune Proactive Remediations

How to use LOG4J and Intune Proactive Remediations to start looking for potentially vulnerable systems.

On December 10th, 2021, CVE-2021-44228 was unveiled; cue the mass panic. A simple logging component that had been around for… forever, in a whole bunch of things including but not limited to Minecraft.

Before you read any further, let me be clear: finding a “file” that has or is vulnerable to this exploit is not the end of dealing with this vulnerability. I have tried to cover all the scenarios I can think of; however, I am not a genius, and there is no way to write this for every possible scenario. It’s one half of the whole. You absolutely need to start continuously hunting for the behaviors attackers would use this class for, while constantly checking for the existence of it in your environment.

If you’re just here for some code, this link should help you.

Vulnerability Hunting vs. Exploit Hunting

Hunting this threat is in and of itself challenging.

When the vulnerability first came out, there were several scripts floating around the internet geared towards hunting the hashes of the vulnerable files. However, as things continued to develop, people started to ask: do these hashes change if someone nests a JAR inside a JAR? Additionally, what if an attacker changes the name of the product, or the class is referenced in another JAR? And how do we tell the difference between something that is just not updated, and something being actively exploited? For more reading on that, I suggest the following MSTIC team articles:

Defender for Cloud finds machines affected by Log4j vulnerabilities (microsoft.com)

Microsoft’s Response to CVE-2021-44228 Apache Log4j 2 – Microsoft Security Response Center

What can you do today with Intune Proactive Remediations?

In Configuration Manager we have CIs (configuration items), and they work amazingly well for hunting the state and contents of specific files.

I’m going to put a second disclaimer here: don’t use this script as an end-all for everything. There is no one-size-fits-all for this vulnerability, and in fact just saying “we patched, all good” might not even be good enough for a while. Since last week, I’ve written three or four different methods to try to detect “do we need to patch this vulnerability.” Each of these methods has different pros and cons based on the number of files, the age of the device, and whether you are looking for obfuscations. In general, people have landed on three main methods.

  • Hash Validation, based on file name
  • File Name Detection
  • File Extension Detection and Class Validation

The script I’m going to show uses the last of these three options. Each of them has its own unique pros and cons. I could spend hours on these, but I don’t think that’s why you’re here. If this is something you’re interested in, let me know and maybe I’ll do some type of follow-up.
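For reference, here is a minimal sketch of the first approach (hash validation). Note that the hash list below is a placeholder I made up, not a real list of vulnerable log4j-core hashes, and the `Test-JarHash` helper name is mine; in practice you would load the community-maintained list of known-vulnerable JAR hashes.

```powershell
# Sketch of method 1: hash validation.
# NOTE: this hash is a placeholder, not a real log4j-core hash; in practice
# you would load the community-maintained vulnerable-hash list.
$knownBadHashes = @(
    'PLACEHOLDER0000000000000000000000000000000000000000000000000000'
)

function Test-JarHash {
    param([string]$Path, [string[]]$BadHashes)
    # Hash the file and return a detection object if it is on the bad list.
    $hash = (Get-FileHash -Path $Path -Algorithm SHA256).Hash
    if ($BadHashes -contains $hash) {
        [pscustomobject]@{ fileName = $Path; fileHash = $hash }
    }
}
```

You would call `Test-JarHash` for each file a file-system sweep returns, instead of cracking the archive open; the trade-off is that nested or renamed JARs slip straight past it.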

If you’ve never created a Proactive Remediation before, rejoice, because the most complicated part of creating one is finding where they are located.

The Script for the Proactive Remediation

This article assumes you know how to create a Proactive Remediation, and will only cover an explanation of the script used for detection.

Search-Log4JClassInfo.PS1

#Warning: this could potentially scan synced SharePoint libraries.

#Note this is using CIM instance - this will NOT work for old Servers and is not intended to be used on them.

$drives = (Get-CimInstance -Query "Select DeviceID from win32_logicaldisk where drivetype = 3").DeviceID

#Set the search string we are hunting for.
$searchString = "*.jar"

#Add type for reading the JarFiles 
Add-Type -AssemblyName "system.io.compression.filesystem"
#Create an object to store found risks

$foundRisks = New-Object -TypeName 'System.Collections.Generic.List[psobject]'
Foreach ($drive in $drives) {
    #Assemble the risky files for the drive.
    $riskyFiles = (&cmd /c robocopy /l $(($drive) + '\') null "$searchString" /ns /njh /njs /np /nc /ndl /xjd /mt /s).trim() | Where-Object { $_ -ne "" }
    #Evaluate each set of risky files to see if there is anything to Evaluate.
    Foreach ($file in $riskyFiles) {
                $data = $null
                $detections = $null
                try{
                    #Warning: this could potentially create a lock on a JAR file - we do dispose of the connection and read at the end, but based on size it could take a moment.
                    $data = [io.Compression.Zipfile]::openRead($file)
                    $detections = $data.Entries | Where-Object {$_.fullname -like "*jndiLookup.class"}
                    $data.Dispose()
                }
                catch{
                    $hash = [ordered]@{
                        fileName = $file
                        class = "UnableToRead"
                        fileHash = $((Get-FileHash -Path $file -Algorithm SHA256).Hash)
                    }
                    $foundRisks.add((New-Object -TypeName psobject -Property $hash))
                }
                if($detections){ 
                    foreach($detection in $detections ){
                        $hash = [ordered]@{
                            fileName = $file
                            class = $detection.FullName
                            fileHash = $((Get-FileHash -Path $file -Algorithm SHA256).Hash)
                        }
                        $foundRisks.add((New-Object -TypeName psobject -Property $hash))
                    }
                }
            }
            
        }
If ($($foundRisks | Measure-Object).count -ge 1) { 
    foreach($risk in $foundRisks){
        #Assemble a Single Large Write Host Command for PR
        $jumboTune = "$jumboTune Found: $($risk.FileName) with Hash:$($risk.fileHash) and Class: $($risk.class)`n"
    }
    Write-Host $jumboTune
    exit 1
}
Else { 
    Write-Host "No Vulnerabilities found"
    exit 0
}

What’s so special about this script? Well, it’s fast. No, seriously, it’s REALLY fast. We are talking “evaluate every file on a 600GB drive in ~19 seconds” fast. Most of this is thanks to Robocopy.

Now let’s break down the code and what’s happening here.

Initial Gathering of Drives
#Note this is using CIM instance - this will NOT work for old Servers and is not intended to be used on them.

$drives = (Get-CimInstance -Query "Select DeviceID from win32_logicaldisk where drivetype = 3").DeviceID
#Set the search string we are hunting for.
$searchString = "*.jar"

#Add type for reading the JarFiles 
Add-Type -AssemblyName "system.io.compression.filesystem"
#Create an object to store found risks

$foundRisks = New-Object -TypeName 'System.Collections.Generic.List[psobject]'

Nothing fancy here; just know that this is using a CIM instance. Assuming you’re using this script in Intune, you should only have machines that support this command. Additionally, we set our search string, in this case anything that ends with “.jar”. We add a type assembly (more on that later), and then create a list of PSObjects. I didn’t have to do that, but I like to use lists; it’s a habit from when I’m not sure what version of PowerShell is in play.
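As an aside, and this is my own substitution rather than part of the original script, the same fixed-disk enumeration can be done through .NET’s DriveInfo, which also works where CIM isn’t available; `DriveType` of `Fixed` maps to the `drivetype = 3` filter in the WMI query.

```powershell
# Alternative drive enumeration via .NET rather than CIM.
# [System.IO.DriveType]::Fixed corresponds to win32_logicaldisk drivetype = 3.
$fixedDrives = [System.IO.DriveInfo]::GetDrives() |
    Where-Object { $_.DriveType -eq [System.IO.DriveType]::Fixed -and $_.IsReady }
$fixedDrives | ForEach-Object { $_.Name }   # e.g. 'C:\' on Windows
```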

We then iterate over each drive and do the following:

Gather RiskyFiles
$riskyFiles = (&cmd /c robocopy /l $(($drive) + '\') null "$searchString" /ns /njh /njs /np /nc /ndl /xjd /mt /s).trim() | Where-Object { $_ -ne "" }
    #Evaluate each set of risky files to see if there is anything to Evaluate.
    Foreach ($file in $riskyFiles) {
                $data = $null
                $detections = $null
                try{
                    #Warning: this could potentially create a lock on a JAR file - we do dispose of the connection and read at the end, but based on size it could take a moment.
                    $data = [io.Compression.Zipfile]::openRead($file)
                    $detections = $data.Entries | Where-Object {$_.fullname -like "*jndiLookup.class"}
                    $data.Dispose()
                }
                catch{
                    $hash = [ordered]@{
                        fileName = $file
                        class = "UnableToRead"
                        fileHash = $((Get-FileHash -Path $file -Algorithm SHA256).Hash)
                    }
                    $foundRisks.add((New-Object -TypeName psobject -Property $hash))
                }

The first line leverages Robocopy to gather only the file names, using the /l switch (list only, no copying) and some other switches to turn off the noise and speed up the process of gathering the files. This provides a list of every file ending in “.jar”.
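For comparison, here is a pure-PowerShell stand-in for that Robocopy pass; it is noticeably slower on large volumes, which is exactly why the script shells out to Robocopy instead. The `Get-JarFiles` helper name is mine, not from the original script.

```powershell
function Get-JarFiles {
    param([string]$Root)
    # Slower, pure-PowerShell stand-in for the robocopy /l listing:
    # recursively enumerate every *.jar under $Root, ignoring access errors.
    Get-ChildItem -Path $Root -Filter '*.jar' -Recurse -File -ErrorAction SilentlyContinue |
        Select-Object -ExpandProperty FullName
}
```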

Now it’s time to go to work. We use the IO.Compression.ZipFile class to open and read the contents of each .JAR file. We then look at all of the “entries” for anything that ends with JndiLookup.class.

We look for this because, if we find the class, we can be reasonably sure the machine is potentially at risk, regardless of whether there is a hash match or the file names match.
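To see that class-inspection step in isolation, here is a self-contained sketch: it builds a throwaway archive (a JAR is just a ZIP) containing a JndiLookup.class entry at a path mimicking log4j-core, then scans it the same way the script does.

```powershell
Add-Type -AssemblyName 'System.IO.Compression.FileSystem'

# Build a throwaway "jar" containing a JndiLookup.class entry.
$jar = Join-Path ([System.IO.Path]::GetTempPath()) 'demo-log4j.jar'
if (Test-Path $jar) { Remove-Item $jar }
$archive = [System.IO.Compression.ZipFile]::Open($jar, 'Create')
$null = $archive.CreateEntry('org/apache/logging/log4j/core/lookup/JndiLookup.class')
$archive.Dispose()

# The detection logic from the script: open read-only, filter entries, dispose.
$data = [System.IO.Compression.ZipFile]::OpenRead($jar)
$detections = $data.Entries | Where-Object { $_.FullName -like '*JndiLookup.class' }
$data.Dispose()
$detections.FullName   # org/apache/logging/log4j/core/lookup/JndiLookup.class
```

Because the match is on the entry path inside the archive, this still fires when the JAR itself has been renamed, which is the whole point of preferring it over name or hash checks.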

We then add some important bits, like where the file is located and its hash, to our storage and continue on.

We could just as easily swap this out for either of the other methods: validating whether the .jar matches the known hash lists, or just reporting based on the name of the jar.

If ($($foundRisks | Measure-Object).count -ge 1) { 
    foreach($risk in $foundRisks){
        #Assemble a Single Large Write Host Command for PR
        $jumboTune = "$jumboTune Found: $($risk.FileName) with Hash:$($risk.fileHash) and Class: $($risk.class)`n"
    }
    Write-Host $jumboTune
    exit 1
}
Else { 
    Write-Host "No Vulnerabilities found"
    exit 0
}

Finally, here at the end, we compound together the names of the potentially vulnerable files and write them to the host screen before exiting with exit code 1 to properly send “risk” back to Intune.

Once the script runs you should get some nice output, which you can then use to evaluate and start making decisions on what should or shouldn’t be remediated. Keep in mind this doesn’t fix the issue, as fixes will continue to evolve; this just helps you identify the problem. Here is an example of what the output looks like on my home machines, which happened to have an old copy of Minecraft on them.

Closing Thoughts

I urge you to keep in mind that this is an evolving threat. The first “patch” to remediate the CVE was found to not always work. The second one encouraged people to just flat-out remove JndiLookup.class from the class path. I wouldn’t be surprised if we find several more things along the way. Your mileage may vary on how you look for the vulnerability, and I encourage you to think about what you are hunting for. Happy Patching.

  1. Hey man, I was looking to write something like that because I’m getting tired of patching this lib every day of the week. I’d like to add a function to download the latest lib files from Apache (or actually most probably an Azure blob storage we control) and automatically replacing the offending files. Do you think that’d be fine? Is there something I should consider before going that route?

    Reply

    1. Good Morning! I think someone wrote a tool that does that in Go, or maybe it was Python, within the last week. I can’t speak for your environment and whether it would or would not be fine, as it will depend on “why” the file existed. I wouldn’t go that route unless you are supremely confident in the reason for the files to exist and that replacing them won’t break anything.

      Reply

  2. Fantastic work Jordan. Thank you for this! Running it in our environment now. This will be a big help.

    Reply

    1. Glad it’s helping you.

      Reply

  3. What did you run as the remediation script?

    Reply

    1. There is no remediation script; this is just a way to find at-risk machines. Because of how the software is used in Java, it would be difficult for me to advise people to simply “run this script, it will fix all your issues” with no understanding of the software that was using the JAR.

      Reply
