PowerShell, AWS, Jenkins and continuously enforced security groups

I recently did a series on how to spin up EC2 instances using PowerShell, Jenkins and Cloudflare. One of the things I really liked about that series was how you could limit the security groups presented to the user when generating the instance. If you have passing experience with the AWS console, you are probably aware it is easy to create a new security group while spinning up an EC2 instance. Since it takes a little extra work to recreate an instance with the correct security group, you often end up with a sea of generic security groups. The groups themselves are not a problem, but if your team is leaving RDP and SSH open to the world it can raise additional security concerns.

The script I wrote integrates with Jenkins and allows you to enforce IP rules for these security groups.
In order for the script to work you will need to have completed the "Getting started" setup for the AWS PowerShell module.
I also recommend you check out Matthew Hodgkins' two-part blog series on getting started with Jenkins and PowerShell.
There are no additional Jenkins plugins needed.

You can get the script I will be talking about from my GitHub here. I wrote two versions; I'll mostly focus on the dynamic IP version, so that is what most of the code snippets and the Jenkins build in this post cover. I'll note the differences for the other Jenkins build further down, since that is the major difference between the two versions.

 

The workflow:

First we’ll focus on the Jenkins build.

The script takes a couple of environment variables from Jenkins to launch. We'll need the "AWSProfiles" parameter with the profile names that were pre-saved through the AWS PowerShell module. Since these are Windows-profile dependent, it is recommended to set them up as the service account running Jenkins.

The other variable the script needs is the list of ports we want to limit to our current external IP. I wrote this script with the idea of limiting management ports, so it will mindlessly lock down every port listed. If you enter port 80 in the list, then only you will be able to reach port 80 on any of your EC2 instances.

 

In the end the build will appear like this:

Note: The script reads the build parameters and uses a ';' (semicolon) delimiter.
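For example, with two management ports entered in the Jenkins parameter, the split in the code below produces a two-element array (parameter values here are illustrative only):

#Illustrative Jenkins build parameter values
#AWSProfiles = Jenkins_deploy
#Ports_list  = 22;3389
"22;3389" -split ";"   #produces @("22", "3389")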

The code that loads the parameters:

import-module awspowershell
#AWS stored Credential names
$profile_list = $ENV:AWSProfiles -split ";"
#Path for log file
$path_log = $ENV:Path_log
#Uses Value "From Port in AWS" SSH is 22, RDP is 3389. This script expects a standard port 22 maps to port 22 design
$Search_ports = $ENV:Ports_list -split ";"

Next we load the Get-ExternalCDR function. This function reaches out to the ipify.org API, parses the JSON response, and returns our external IP in the CIDR format AWS firewall rules expect.

function Get-externalCDR() {
try {
$ip = $(invoke-restmethod 'https://api.ipify.org?format=json' | select -expandproperty IP) + "/32"
return $ip
} catch {
return $false
}
}
#Allowed IP Ranges.
$Allowed_IP_Ranges = Get-ExternalCDR

Load the profile names and then start iterating through all the AWS regions:

foreach($profile in $profile_list) {
#In case of security group overlap, clear the ID list for every profile
[array]$updated_group_id = $NULL
#Set the AWS profile to use
Set-AWSCredentials -ProfileName $profile
#Iterate through all possible regions
$region_list = Get-AWSRegion | select -expandproperty Region
foreach($region in $region_list) {

Next we iterate through each instance and grab the assigned security group ID and Name:

$Instance_list = Get-EC2Instance -region $region |select -expandproperty instances
$VPC_list = Get-EC2Vpc -Region $region
foreach ($VPC in $VPC_list) {
$Instance_list | Where-Object {$_.VpcId -eq $VPC.VpcId} | foreach-object {
$Instance_name = ($_.Tags | Where-Object {$_.Key -eq 'Name'}).Value
$SecurityGroups = $_.SecurityGroups.GroupName
$SecurityGroupID = $_.SecurityGroups.GroupID

Next we confirm that we haven’t already touched this particular group. This is just to save time in larger AWS environments. If we update Security Group A then every EC2 instance with Security Group A is already up to date.

if($updated_group_id -notcontains $SecurityGroupID) {

Now the good part. We take the array of ports we defined in the Jenkins build and check that the security group we are looking at actually has rules for them. If it doesn't, we just skip the group. Then we check each IP range on those ports against our array of allowed options. Since this is the dynamic script, it is checking for our extremely limited /32 CIDR and will remove any rule that doesn't match. I use Compare-Object to skip rules that are already correct.
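To see why the SideIndicator filter works, here is a minimal illustration with made-up values; "=>" marks ranges that exist only in the security group (to be revoked), while "<=" marks ranges that exist only in our allowed list (to be granted):

#Hypothetical values for illustration only
$Allowed_IP_Ranges = "203.0.113.10/32"
$Found_IP_List = @("0.0.0.0/0", "203.0.113.10/32")
Compare-Object -ReferenceObject $Allowed_IP_Ranges -DifferenceObject $Found_IP_List
#InputObject   SideIndicator
#-----------   -------------
#0.0.0.0/0     =>            (only in the group, so it gets removed)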

foreach($port in $Search_Ports) {
if($Found_IP_List = $(Get-EC2SecurityGroup $SecurityGroupID -Region $region ).IpPermissions | where { $_.FromPort -eq "$port" } | select -expandproperty IPRange) {
$Removable_IPs = Compare-Object -ReferenceObject $Allowed_IP_Ranges -DifferenceObject $Found_IP_List | where { $_.SideIndicator -eq "=>" } | select -expandproperty InputObject
foreach($IP_Current_Rule in $Removable_IPs) {
$Time = Get-date -format "s"
echo "$Time : Removing $IP_Current_Rule from $SecurityGroups ( $SecurityGroupID ) with $profile in $region found on $Instance_name"
echo "$Time : Removing $IP_Current_Rule from $SecurityGroups ( $SecurityGroupID ) with $profile in $region found on $Instance_name" >> $path_log
Try {
$Firewall_rule = @{ IpProtocol="tcp"; FromPort="$port"; ToPort="$port"; IpRanges= "$IP_Current_Rule" }
Revoke-EC2SecurityGroupIngress -GroupId $SecurityGroupID -IpPermissions $Firewall_rule -Region $region
} catch {
echo "$Time ERROR: REMOVING $port for $SecurityGroups ($SecurityGroupID)"
echo "$Time ERROR: REMOVING $port for $SecurityGroups ($SecurityGroupID)" >> $Path_log
$_
$_ >> $Path_log
exit 1
}
}

We are still in the foreach loop over $port in $Search_Ports, and still inside the if check where the $Found_IP_List variable was defined. Next we need to apply the allowed IPs to the security group. We re-compare the IPs to make sure we don't try to apply the same IP twice. The goal here is to clean everything up:

$Allowed_IPs = Compare-Object -ReferenceObject $Allowed_IP_Ranges -DifferenceObject $Found_IP_List | where { $_.SideIndicator -eq "<=" } | select -expandproperty InputObject
foreach($IP in $Allowed_IPs) {
if($Found_IP_List -notcontains $IP) {
$Time = Get-date -format "s"
echo "$Time : Adding $IP from $SecurityGroups ( $SecurityGroupID ) with $profile in $region found on $Instance_name"
echo "$Time : Adding $IP from $SecurityGroups ( $SecurityGroupID ) with $profile in $region found on $Instance_name" >> $path_log
Try {
$Firewall_rule = @{ IpProtocol="tcp"; FromPort="$port"; ToPort="$port"; IpRanges= "$IP" }
Grant-EC2SecurityGroupIngress -GroupId $SecurityGroupID -IpPermission @( $Firewall_rule ) -Region $region
} catch {
echo "$Time ERROR: ADDING $port for $SecurityGroups ($SecurityGroupID)"
echo "$Time ERROR: ADDING $port for $SecurityGroups ($SecurityGroupID)" >> $Path_log
$_
$_ >> $Path_log
exit 1
}
}
}
}

We are now out of the if($Found_IP_List) check. The final step is adding the group to the skip variable $updated_group_id. This, again, is to save time in large environments.

$updated_group_id += $SecurityGroupID.Trim()
}
}
}
}
}

There are some risks associated with this script, but they are pretty minor. The major issue is that if the connection to AWS via PowerShell is interrupted, it is possible a security group will be left without a management port re-added. If that happens, re-running the script won't fix it; you'll need to go through the log file the script generates to find the old setting. That is the reason I log both to the Jenkins console AND to a file, so you have a history of old settings. This is a blunt tool that works well for standardizing an environment. You may need to modify it to fit yours.
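If you do end up in that state, the log entries contain everything needed to put a rule back by hand. A one-off restore would look something like the following; the group ID, port and CIDR come from the log line, and the values shown here are placeholders:

$Firewall_rule = @{ IpProtocol="tcp"; FromPort="3389"; ToPort="3389"; IpRanges= "198.51.100.25/32" }
Grant-EC2SecurityGroupIngress -GroupId "sg-0123456789abcdef0" -IpPermission @( $Firewall_rule ) -Region "us-west-2"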

 

Bonus: Array-defined IPs

Earlier I said I had worked on two versions. The other version is almost identical to this one except it accepts an array of IPs. You can find that version in the same GitHub repo I linked earlier.

The major difference is the Jenkins build needs a new parameter added:

And the variable:

$Allowed_IP_Ranges = $ENV:Allowed_IP_list -split ";"
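As an example, a build parameter like the one below (placeholder ranges, in the CIDR notation AWS expects) becomes a two-element array; those ranges are granted on every monitored port and anything else on those ports is revoked:

#Illustrative Jenkins parameter value only
$ENV:Allowed_IP_list = "192.0.2.10/32;198.51.100.0/24"
$ENV:Allowed_IP_list -split ";"   #-> @("192.0.2.10/32", "198.51.100.0/24")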

 

Thanks for reading.

I_Script_Stuff

Jenkins, Powershell, AWS and Cloudflare Automated Deployment. (Part3 Cloudflare)

If you're interested in only the code, the export of the Jenkins template, and the list of needed plugins, it can all be found on my GitHub. I have made some changes since Part 2 was written, so I suggest updating if you're using an old version of the script.

This is Part 3 of my series on deploying an EC2 instance and Integrating Cloudflare using Powershell and Jenkins. You can find my previous posts here Part 1 and Part 2.

In Part 2 I walked you through creating the Jenkins build and populating the parameters, and at the end you should have been able to launch a successful EC2 instance. In this section I will cover injecting the environment variables into a second build step and using the Cloudflare API to create a non-proxied DNS entry. Non-proxied DNS entries act as standard DNS entries and are not affected by the Cloudbleed issue from a few weeks ago.

This post assumes you have a configured Cloudflare account for the domain you want to auto-deploy. You will also need the email address and API key for your Cloudflare account.

In your Jenkins instance, go to the build that was created in Part 2 for launching the EC2 instance. Scroll to the Domain parameter and make sure you enter domains that match what is available in Cloudflare and that you want available for deployment.

Next we need to add a new build step. Select "Inject environment variables" and set the Properties File Path to "build.prop". Make sure this build step comes after the PowerShell EC2 launch step:

 

 


Connect to the console of your Jenkins server and go to the workspace. Normally that is C:\Program Files\Jenkins\workspace\<build name>. You can go back to one of your previous test builds and find the workspace path in the console output to verify.

Create a blank file called build.prop.


I go into more detail about the EnvInject plugin here.

Back in the Jenkins web control panel, create another PowerShell build step and add the contents of build2_Cloudflare.ps1 to it.

At the top of the script configure your API information:

The $email and $api_key values come directly from Cloudflare.
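The configuration section at the top of build2_Cloudflare.ps1 boils down to a few variables, along the same lines as my other Cloudflare scripts; the values below are placeholders, and $overwriteip is covered in the next paragraph:

$email = "<email registered with cloudflare>"
$api_key = "<api key from cloudflare>"
$overwriteip = $true   #overwrite the IP of an existing DNS entry on a collision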

You can also decide on the default behavior for IP collisions within Cloudflare: if a subdomain already exists, this decides whether the script should overwrite the IP. If the variable $overwriteip is set to $true, the script will attempt to update the existing record and, if that fails, try to create it. If it is set to $false, only the attempt to create the record is made. In either case, if the DNS record isn't created or updated, the Elastic IP assigned to the new instance is written to the console so the user can still reach it.
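A rough sketch of that decision logic, using the update-cfdns and create-cfdns functions from my Cloudflare post ($fqdn and $ip stand in for the FQDN and Elastic IP the script builds later; this is a simplification, not a copy of the build script):

if ($overwriteip) {
    #Try to update an existing record first, then fall back to creating one
    if (-not (update-cfdns $fqdn $ip)) { create-cfdns $fqdn "A" $ip }
} else {
    create-cfdns $fqdn "A" $ip
}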

No other configuration is needed for the Cloudflare segment. The flow simply grabs the domain and the AWS instance name and uses them to generate the FQDN. The Elastic IP is injected into the properties file by the first build step like so:

$PublicIP = $ellastic_ip_allocation | select -expandproperty PublicIP
echo "Passing Env variable $PublicIP"
"ElasticIP = $PublicIP" | Out-file build.prop -Encoding ASCII -force

The second build step picks up the injected variable like any other environment variable:

cd $env:WORKSPACE
$EIP = $env:ElasticIP
$domain_partial = $ENV:Domain
$domain_Instance_Name = $ENV:Instance_Name
$domain_FQDN = $domain_Instance_Name + "." + $domain_partial

For the Cloudflare script and code, I go into a lot more detail on the functions in this post here.

I hope this blog series provided inspiration on how to build out a DevOps-friendly deployment: something that gives the development team the freedom to deploy servers, while keeping the consistent and secure environment that every operations team needs to thrive.

Thanks for reading,
I_Script_Stuff

Jenkins, Powershell, AWS and Cloudflare Automated Deployment. (Part2 Configure AWS)

If you're interested in only the code, the export of the Jenkins template, and the list of needed plugins, it can all be found on my GitHub.

Welcome to Part Two of my ongoing series about how to do an AWS deployment with Cloudflare using Jenkins and PowerShell. Part One can be found here. In the last post I listed the needed plugins for Jenkins and PowerShell and the workflow for the script. If you have been working with the code I posted, I suggest grabbing the latest from my GitHub; I have made a few minor improvements.

For this post I'm going to focus on the different configuration settings that are needed, both in the Jenkins build and in AWS. By the end of the post you will be able to deploy an EC2 instance of your choosing to any configured region.

The full set of Jenkins build options:

Build Options Break Down

Region

This is where we limit the regions the user will be able to deploy into. I have currently picked us-west-2 and us-west-1. Since us-west-2 is already configured, I'll be building out the requirements in us-west-1. You can pick whichever region you prefer for this example. A list of AWS regions can be found here.

Jenkins Build:

Under the hood with Powershell:

The region Choice parameter is assigned to a variable:

$region = $ENV:Region

Instance Name

Instance name is what the EC2 instance is named. It is also used later when deploying to Cloudflare. No special configuration is needed outside the Jenkins build.

Jenkins Build:

I recommend leaving the String Parameter's default value blank. This is due to how the AWS script validates the name.

Under the hood with Powershell:

Instance name is assigned to a variable.

$instance_name = $ENV:Instance_Name

Later in the code we check that the instance was actually given a name longer than a single character:

if($instance_name.length -le 1) {
echo "ERROR: Instance must be named and the length must be greater than 1."
echo "ERROR: Instance name: $instance_name"
echo "ERROR: Instance name length" $instance_name.length
exit 1
}

 

Domain

This is not needed by the AWS deployment section, but it is used in the Cloudflare deployment; no PowerShell code in the AWS deployment depends on it. I'll touch on it more in the next post when I cover the Cloudflare section. Just make sure the domains you want to use are listed, and that they are already in Cloudflare or you're willing to transfer them to Cloudflare.

Jenkins build:

Under the hood with powershell: Not used during this stage. We will cover it in the next post.

Image_type

Image IDs change from month to month and from region to region; the month-to-month change is largely due to updates. So I went with the image description, plus the most up-to-date version of the image, as the selection criteria for this section.

Jenkins Build:

Launch the AWS console, go to EC2 instances, and click on the Launch Instance wizard.

Select the image you want. For this example I’m going with a Public Windows AMI:

Notice the AMI ID is "ami-179ac977".

On the right-hand side select "Community AMIs" and find your AMI again:

Now copy the description, which reads "Microsoft Windows Server 2016 with Desktop Experience Locale English AMI provided by Amazon", and paste it into a choice parameter labeled image_type in your Jenkins build.

Under the hood with powershell:

$image_type is assigned near the top of the script.

$image_type = $ENV:image_type

Later on, $image_type combined with the $region variable is used to search for the description and return the ImageId as a string.

Note the -First 1 added to the select. This is because Amazon can keep multiple images with different levels of updates; this makes sure you always get the most up-to-date image:

Get-EC2Image -Owner amazon, self -Region $region | where { $_.Description -eq $image_type } | select -first 1 -expandproperty ImageId

Instance Type:

You are free to use any of the common model names for instance types found here. In Jenkins the top choice is selected by default, so I personally put the cheapest instance type for the project at the top. The build of course doesn't care what order you sort them in, so do what makes sense for your project.

Jenkins Build:

Under the hood with Powershell:

The variable is defined. Since the instance type is universal between regions and rarely changes, that is all we need to do until the AWS instance is generated later in the script.

$Instance_Type = $ENV:Instance_Type

 

Security_Group:

The name of the security group must exist in every region you will be deploying into. I personally name groups after the services that are allowed.

Go back to your AWS Management console, select the region you have chosen to configure. For me that is us-west-1.

In the EC2 dashboard go to the security groups option on the left. Create a security group and name it something that makes sense for your project.

In my case I created “SSH, HTTP, HTTPS, ICMP” and “RDP, HTTP, HTTPS, ICMP”

Set the rules according to your security policy, and pick the VPC settings that match your AWS setup.

Under the hood with Powershell:

The security group name is looked up and the GroupId is assigned:

$SecurityGroup_Id = Get-EC2SecurityGroup -Region "$region" | where { $_.Groupname -eq "$SecurityGroup" } | select -expandproperty GroupId

AWS_Profile:

When originally setting up the AWS PowerShell module, the getting-started guide has you store your credentials in PowerShell.

Make sure the Jenkins service is running as a user account and that the AWS credentials were stored under that profile. I named my profile "Jenkins_deploy". You can store multiple AWS profiles in this manner, and the profile name doesn't really matter, so you can name it something more descriptive for your users.
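If you have not saved the profile yet, log in to the Jenkins server as that service account and store the keys once. With the AWSPowerShell module that is a single call (the access and secret keys below are placeholders):

#Run once as the Jenkins service account; the profile is stored in that user's credential store
Set-AWSCredentials -AccessKey "AKIAXXXXXXXXXXXXXXXX" -SecretKey "<secret key>" -StoreAs "Jenkins_deploy"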

Under the hood with Powershell:

The Environment variable is set, and the AWS profile credentials are loaded:

$aws_profile = $ENV:AWS_Profile

Set-AWSCredentials -ProfileName $aws_profile

Key_Pair:

In each AWS region you will need to make sure there is a key pair uploaded and configured for the VM you are generating. If you plan on deploying to only one region, you can let AWS generate the key pair and be all set. If you plan on using the same key pair across multiple regions, I suggest looking into generating and importing the key pairs yourself.
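One approach, if you want the same key everywhere, is to generate the key pair locally (for example with ssh-keygen) and import the public half into each region you deploy to. A hedged sketch with placeholder paths, key name and regions:

#Read the locally generated public key and base64 encode it for the EC2 API
$publicKey = Get-Content "C:\keys\jenkins_deploy.pub" -Raw
$publicKeyB64 = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($publicKey))
foreach ($region in @("us-west-1", "us-west-2")) {
    Import-EC2KeyPair -KeyName "Jenkins_Deploy_Key" -PublicKeyMaterial $publicKeyB64 -Region $region
}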

Under the hood with Powershell:

The environment variable is assigned to a variable:

$Key_pair = $ENV:Key_Pair

 

Build Variables And Tagging:

If you installed the plugins from the last post you should see a check mark field:

This will populate the BUILD_USER and BUILD_TAG variables:

Under the hood with Powershell:

$builder = $ENV:BUILD_USER
$buildtag = $ENV:BUILD_TAG

#Once EC2 instance is created tags are added that are visible in the AWS console
echo "Naming Instance"
$tag = New-Object Amazon.EC2.Model.Tag
$tag.Key = "Name"
$tag.Value = "$instance_name"

New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region
echo "Tagging build information"
$tag.Key = "BuiltBy"
$tag.Value = "$builder"
New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region

$tag.Key = "BuildTag"
$tag.Value = "$buildtag"

New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region

Optional:

So now you should have added all the Jenkins build parameters: Region, Instance_Name, Domain, image_type, Instance_Type, Security_Group, AWS_Profile and Key_Pair. If any of these will be static in your environment, you can skip the Jenkins parameter by removing it and simply defining the variable in the build as a static string:


$aws_profile = $ENV:AWS_Profile
#Load all the other environment variables AWS needs to create an instance

$region = "us-west-1"
$instance_name = $ENV:Instance_Name
$builder = $ENV:BUILD_USER
$buildtag = $ENV:BUILD_TAG
$image_type = $ENV:image_type
$Instance_Type = "t2.micro"
$domain = $ENV:Domain
$Key_pair = $ENV:Key_Pair
$SecurityGroup = $ENV:Security_Group

 

Add Powershell Build Step:

The final step is to add a PowerShell build step and copy the PowerShell code into it.

It is easiest to pull the latest code off of my GitHub and copy and paste it into place.

 

Now run the jenkins build.

A successful build’s console output will look like:


A failure should provide an error message that hints at where you went wrong; I suggest checking for stray spaces or typos. I used try/catch blocks that should display the PowerShell error in the Jenkins console output.

Alright, that is it for this week. In Part 3 I’ll combine the injected environment variables and talk about the cloudflare integration steps and the Powershell code driving it in Jenkins.
Thanks for reading

I_script_stuff

Jenkins, Powershell, AWS and Cloudflare Automated Deployment.

If you're interested in only the code, the export of the Jenkins template, and the list of needed plugins, it can all be found on my GitHub.

Introduction:

This multipart series is about using Jenkins to spin up an EC2 instance, add an Elastic IP, and deploy the DNS to Cloudflare using PowerShell. This is meant to be a template configuration for Windows DevOps teams to start building out environments. From here you can build your own AMI, tie in your own GitHub/SVN code deploy, or use Ansible/Chef/Puppet/PowerShell/a million other options to complete an automated deployment.

Some advantages to this method:

The user doing the build will be able to pick an instance, pick a security group, and deploy into multiple regions, while Jenkins controls 100% of the options available. This ensures consistency across builds and allows you to limit who requires credentials to critical infrastructure. We can also apply tags and enforce mandatory tagging. This method also uses the concept of "only right answers": though the user is given options, those options are limited within Jenkins, and those limitations make sense for the project. This project is a template that will let you have confidence that tomorrow morning you will not wake up to a dozen d2.8xlarge instances with 5 TB of storage each acting as a cluster of DNS servers.

I have written posts about many of the tools used in this series (AWS Report using Powershell, Managing Cloudflare with Active Directory, Jenkins EnvInject Plugin, Migrating Powershell Scheduled Tasks to Jenkins), so if you want other posts that cover the getting-started concepts I'd recommend those. I will note that Matthew Hodgkins wrote a really great two-part blog series on getting started with Jenkins and PowerShell, and the AWS plugin getting-started docs are a great resource if you are trying to do this project from scratch.

Plugins and Configuration:

For this project to work there are a few plugins you will absolutely need:

Jenkins (latest version)

Powershell 4.0

AWS Powershell plugin : Used to integrate the PowerShell script with AWS.

Environment Injector Plugin : Used to keep some variables between Jenkins build steps.

User build vars plugin : Used to get data about who triggered the Jenkins build.

Role-based Authorization Strategy (Optional) : This can be used to limit which builds a user has access to. I’ll cover this in a later post.

 

Additional configuration notes:

I have Jenkins running as a service account. I did this because I wanted to run PowerShell plugins, and a few times I have run into odd behavior without a full user profile available. I haven't tested whether this works without it.

Workflow:

The user has connected to Jenkins and found the build they want.

When initializing the build they encounter configuration options:

They click Build and Jenkins launches the build. The user waits a few minutes, and in the AWS console a new EC2 instance has spawned with tags:

The Name has been tagged "WebDeploy", and two new tags have been created. BuildTag shows the Jenkins job that launched the instance; since we have 256 Unicode characters to work with in AWS and 200+ in Jenkins, I was able to name the build after the environment it was launched in (Prod or production) and a billing code (Billcode0001), and the trailing -2 is which Jenkins build number the deploy came from. The other tag is BuiltBy, which is the Jenkins user account that triggered the build.

After the AWS instance is fully online, PowerShell makes API calls to Cloudflare. A new entry is created, or an old entry is updated. WebDeploy is given the FQDN webdeploy.electric-horizons.com:

Under the Hood:


#Import AWS powershell module
import-module awspowershell

#Enforce working in our current Jenkins workspace.
cd $env:WORKSPACE

#
#ENV:AWS_Profile comes from the build parameters set earlier; it provides the AWS profile credentials
#

$aws_profile = $ENV:AWS_Profile
Set-AWSCredentials -ProfileName $aws_profile

#Load all the other environment variables AWS needs to create an instance

$region = $ENV:Region
$instance_name = $ENV:Instance_Name
$builder = $ENV:BUILD_USER
$buildtag = $ENV:BUILD_TAG
$image_type = $ENV:image_type
$Instance_Type = $ENV:Instance_Type
$domain = $ENV:Domain
$public_key = $ENV:Public_Key
$SecurityGroup = $ENV:Security_Group

#Search for the Security group Name tag Value. More on this in the next post.

try {
$SecurityGroup_Id = Get-EC2SecurityGroup -Region "$region" | where { $_.Groupname -eq "$SecurityGroup" } | select -expandproperty GroupId
echo "Security group Identification response:"
$SecurityGroup_Id

} catch {
$_
exit 1
}

#Make sure that the Instance name is not blank.

if($instance_name.length -le 1) {
echo "ERROR: Instance must be named and the length must be greater than 1."
echo "ERROR: Instance name: $instance_name"
echo "ERROR: Instance name length" $instance_name.length
exit 1
}

#Select AWS AMI. This is limited to the ones owned by Amazon. And gets the most up to date image.

try {
$image_id = Get-EC2Image -Owner amazon, self -Region $region | where { $_.Description -eq $image_type } | select -first 1 -expandproperty ImageId
echo "EC2 Image ID Response:"
$image_id
} catch {
$_
exit 1
}

#Generate the instance, with all environmental variables provided from Jenkins build.

try {
$instance_info = New-EC2Instance -ImageId $image_id -MinCount 1 -MaxCount 1 -KeyName $public_key -SecurityGroupId $SecurityGroup_Id -InstanceType $instance_type -Region $region
echo "Image generation response"
$instance_info
} catch {
$_
exit 1
}

#Let the user know things are working as intended while we wait for the instance to reach the running state.

echo "Please wait for image to fully generate"
while($(Get-Ec2instance -instanceid $instance_info.instances.instanceid -region $region).Instances.State.Name.value -ne "running") {
sleep 1
}

#Apply tags to the instance

echo "Naming Instance"
$tag = New-Object Amazon.EC2.Model.Tag
$tag.Key = "Name"
$tag.Value = "$instance_name"

New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region
echo "Tagging build information"
$tag.Key = "BuiltBy"
$tag.Value = "$builder"
New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region

$tag.Key = "BuildTag"
$tag.Value = "$buildtag"

New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region

#Attach an elastic IP to the instance

try {
$ellastic_ip_allocation = New-EC2Address -Region $region
echo "Elastic IP registered:"
$ellastic_ip_allocation
} catch {
echo "ERROR: Registering Ec2Address"
$_
exit 1
#return $false
}

#Assign the elastic IP to the instance

try {
$response = Register-Ec2Address -instanceid $instance_info.instances.instanceid -AllocationID $ellastic_ip_allocation.allocationid -Region $region
echo "Register EC2Address Response:"
$response
} catch {
echo "ERROR: Associating EC2Address:"
$_
exit 1
}

#Send the elastic IP value to the EnvInj plugin:

$PublicIP = $ellastic_ip_allocation | select -expandproperty PublicIP
echo "Passing Env variable $PublicIP"
"ElasticIP = $PublicIP" | Out-file build.prop -Encoding ASCII -force

exit 0

In the next post I'll cover prepping the AWS environment, prepping the Jenkins build, and the key places to customize for your environment.

If you want the whole project you can get it from my Github.

Part Two can be found here

Thanks for reading,

I_script_stuff

Jenkins EnvInject Plugin and Powershell

I have been working on larger projects in Jenkins and hope to have a post put together in the next few weeks that shows better uses for multiple build steps. For now I thought I'd talk about a plugin I came across and some of the quirks you'll run into when combining PowerShell, Jenkins and the EnvInject plugin.

The plugin allows you to create a file and define environment variables in it. These variables can persist between build steps and builds. It also allows you to override the default ENV variables Jenkins gets from the server, which can be used to enforce a consistent environment across different Jenkins servers. For example, if you need to modify the Windows Path variable ($env:Path in Jenkins), you can add a config file that makes the modification temporarily for the build rather than making a permanent change to the Windows OS. You can read more on the Environment Inject Plugin wiki.

For this post I'll just use a few simple scripts to create a new environment variable and hand it to a second build step. You could use a CSV file to do this as well, but I like the uniform cleanliness of using environment variables. It simplifies script re-use for me; your mileage may of course vary with your workflow.

 

First we will install the plugin. You’ll need to go to Manage Jenkins → Manage Plugins → Available

Filter for “Envinject” and install the plugin:

 

For the first build we won't be enabling the plugin; this is to show the default behavior of Jenkins so we can contrast it with the plugin's behavior. Click New Item. I named the build "Env-Inject Example 1" and used a Freestyle project. Once created, the build properties are as shown:

The non-working code can be found here on pastebin.

The code simply outputs some text so we can keep track of what is going on from the console output.

Unsurprisingly we see a blank spot before Finished: SUCCESS, where the $Keep_me variable didn't make it to the second build step.

The Environment Inject Plugin requires a file to read from. We can define one anywhere and provide the path; for simplicity I'll add the file to the project's workspace. The workspace path can be seen in our console output above; look for the line "Building in workspace". For my configuration it is "C:\Program Files\Jenkins\workspace\Env-Inject Example1". Let's create a blank file on the Jenkins server called build.prop.

Once the file is in place, go back to Jenkins and add "Inject environment variables" to the build.

We will then move the step to the middle of the Jenkins build like so:

With the injected file configured at the top of the build, we need to change the code to write to the file. The code (shown above) can be found on pastebin here.

For the first build step, the changes are at the end:


echo "build 1"
$keep_me = "fubar"
echo "do more stuff"
$save_variable = "Keep_me=" + $keep_me
$save_variable |Out-file build.prop -Encoding ASCII

We output the variable with the environment tag we want; in this case I called it "Keep_me", just like the variable name. I then output to the build.prop file. Notice how I didn't define a path: since we put the original file in the workspace, Jenkins checks there first for assets. If you want to output to a different location, simply define the path to use. I also explicitly declared the output encoding as ASCII. PowerShell's Out-File and redirection default to UTF-16 (Unicode), and Jenkins doesn't like that encoding. So writing the code like:


echo "KEEP_ME=Fubar" >> build.prop

would fail, since the encoding would be UTF-16 and Jenkins would misinterpret the result.

In the second build step we read the environment variable and assign it to a local variable:

echo "build 2"
$keep_me =$env:Keep_me
echo "Did it make it"
echo "$Keep_me"

This is the same way you would declare any other environment variable available to Jenkins. For example, the Path environment variable found in Windows is $env:Path.

When we run the build, the console output just above Finished: SUCCESS now shows "fubar", like we declared.

If you go back to your build.prop file you will see that it has been set with the environment variable.

That is a persistent change. You can overwrite the variable so each build is different, but the file keeps the variables from the last build. You can also prep multiple overriding environment variables from within the file.

Besides the persistent variables in the file, there are a few other things you should know.

If the file is missing you will see errors like:

To help with troubleshooting, and to keep track of which environment variables were used with each build step, a new option is added to the Jenkins post-build report:

The environment variables used for the build are reported. As you can see, the Keep_Me variable we used is shown:

How are you keeping track of variables across builds in Jenkins? Any other preferred methods? Let me know in the comments!
Thanks for reading.
I_Script_Stuff

A script to manage CloudFlare’s DNS within Active directory DNS

Using the Cloudflare API and PowerShell, it is possible to keep a Windows 2012 R2 DNS server the master of a Cloudflare DNS zone. This works with all pricing tiers of Cloudflare. I have seen a few solutions for keeping Cloudflare synced with AD, but none were based in PowerShell, and several are only hosted on SourceForge, which has recently gotten a dubious reputation (I understand that is being worked on/cleaned up). I'm sure many people can agree that being able to see and read the source code is the better option. So this is what I put together.

This script is designed to run off of a domain controller/DNS server by default through a basic task scheduler configuration.

You can get a copy of both scripts offered here from my GitHub or pastebin.

A quick break down of the script:

Global configs:

Authentication/To-do list of domains:

$domain_to_sync_list = ("electric-horizons.com", "Example.com")

$email = "ReallyRealEmail@gmail.com"

$api_key = "THEAPIKEYIGOTFROMCLOUDFLARE!@"

Cloudflare's API is pretty well documented. You can find your API key here, and the email is simply the one you registered with.

One last value is considered global.

The “strict” value:

$strict = $false

When this value is set to true the script will delete any DNS entries that do not match those found in Active Directory DNS.

The script uses 5 functions and a foreach loop. A brief explanation of each function:

get-cfzoneid:

This function simply identifies the Zone id number cloudflare has assigned. Running it without arguments outputs the id of every Zone attached to your account in CloudFlare.

Create-cfdns:

Given a FQDN, a type (CNAME, A, etc.), an IP address/domain, and optionally the ID generated by get-cfzoneid, this function will generate a new DNS entry in Cloudflare.

Update-cfdns:

Given a FQDN, an IP address, and optionally the ID, this will update an existing Cloudflare DNS entry.

Delete-cFdns:

Given a FQDN and optionally the zone ID, this command will, without further prompting, delete an existing Cloudflare DNS entry. It will output what was deleted to the console.

Get-cfdnslist

This command simply outputs all DNS entries stored within Cloudflare when provided with the mandatory zone id.
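Put together, a typical call chain looks something like this; the FQDN and IP are placeholders, and the exact parameter handling is in the script itself:

$id = get-cfzoneid "electric-horizons.com"                          #look up the zone ID once
create-cfdns "test.electric-horizons.com" "A" "203.0.113.10" $id    #create a new A record
update-cfdns "test.electric-horizons.com" "203.0.113.25" $id        #point it at a new IP
delete-cfdns "test.electric-horizons.com" $id                       #and remove it again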

The foreach loop(s):

The foreach loop is broken up into 2 main sections, with several steps discussed below:

The primary section enumerates the Active Directory zone based on $domain_to_sync_list. It only grabs the CNAME and A entries; you can handle MX and any other record types you like simply by adding them to the Where-Object check:

foreach($domain in $domain_to_sync_list) {
$dns_list = Get-DnsServerResourceRecord -ZoneName "$domain" | where {($_.RecordType -eq "A") -or ($_.RecordType -eq "CNAME")}

Cloudflare is very generous: they allow 1200 calls in 5 minutes, or about 4 calls a second. I didn't see a reason to abuse that generosity, so I load the zone ID for the domain we are working with at the beginning, even though update-cfdns and create-cfdns will look it up if not provided a value. The script isn't particularly fast, but for a large zone you may need to add a sleep to the loop:

$id = get-cfzoneid $domain

A second foreach loop handles the data we gathered from the earlier AD query. It uses a switch statement to select the procedure to follow based on the DNS entry type. Because of how PowerShell and the DNS cmdlet return the objects, I used different calls for each type. This is where you would add handling for MX, TXT or SPF records; I made it easy to add them, but they will need to be tested in your own environment.
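The switch itself is not reproduced line for line here, but its shape is roughly the following; the property names are what Get-DnsServerResourceRecord returns for A and CNAME records, and the rest is a sketch rather than a copy of the script:

foreach ($record in $dns_list) {
    $fqdn = $record.Hostname + "." + $domain
    switch ($record.RecordType) {
        "A"     { $target = $record.RecordData.IPv4Address.IPAddressToString }
        "CNAME" { $target = $record.RecordData.HostNameAlias }
    }
    #Update the entry if Cloudflare already has it, otherwise create it
    if (-not (update-cfdns $fqdn $target $id)) {
        create-cfdns $fqdn $record.RecordType $target $id
    }
}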

 

Outside the second foreach loop we check the strict variable. If strict is true, we remove any entries that do not match Active Directory.

if($strict) {
$cf_dns_list = get-cfdnslist $id
foreach($entry in $cf_dns_list.result.name) {
$replace_string = "." + $domain
$verify = $entry -Replace $replace_string
if($dns_list.Hostname -notcontains "$verify") {
delete-CFdns $entry $id
}
}
}

This, plus a few closing brackets, completes the cycle of the script.

Thanks for reading.

As a bonus:

Dynamic dns script

Those same 5 functions can be used to create your own free dynamic DNS service. You won't need Active Directory DNS for this. By using another service such as api.ipify.org to get your external IP, you can point a DNS entry at your house. The script can be found on my GitHub here, or on pastebin here.

Modifying these variables at the top is all that is needed:

List of FQDNs you want to update with your dynamic entry:

$domain_list = ("fqdn for dynamic update")

The email and API key described earlier.

$email = "<email from cloudflare>"
$api_key = "<api key from cloudflare>"

Thanks for reading,
I-Script-Stuff

Migrating Powershell Scheduled Tasks from Windows Task Scheduler to Jenkins

Why migrate to Jenkins?

As with all technology, it has to do with your workflow. Jenkins provides a nice generic dashboard that is easy to monitor, lets me give out access by project, and integrates nicely with SVN and Git. My general goal is to give the developers more control over the environment (expand drives, spin up virtual machines, migrate virtual machines to different hosts, run maintenance tasks, etc.).

If you have never used Jenkins or want to know how to set it up on Windows, Matthew Hodgkins wrote a really great two-part blog series on getting started with Jenkins and PowerShell, so I won't cover the basic configuration; his posts cover all of the basics and then some. I will assume you have at least finished part 1 of his series, so I can jump straight to migrating your scheduled tasks to Jenkins. I am also going to cover script error handling gotchas and how to exit scripts with Jenkins. That part isn't scheduled-task specific, but it is something you should be aware of as you migrate your scripts to the new platform, and it may be a style change for your Jenkins scripts.

Creating a Scheduled Task:

For this example create a scheduled task:

First select "New Item" to create a new project:


We will be doing a “FreeStyle project” that I am calling “Run every 5 minutes”:


At this time we will be making an extremely simple scheduled task. That means we can ignore most of the options, scrolling down to "Build Triggers" and selecting "Build periodically".

In the text box we will be entering:

H/5 * * * *


The scheduling is built around the cron timing syntax. Clicking the question mark offers an excellent drop-down with a full explanation of how Jenkins jobs are scheduled. For now it is enough to know we scheduled the task to run every 5 minutes.

A quick note: the system doesn't evaluate your schedule until you click off of the text window, so it won't show the run explanation until you click away.

For this simple task we will be outputting text to a file:

echo "write a line" >> C:\temp\write_to_file.txt


 

Since our script outputs to a local directory, we'll need to add the C:\temp folder on the Jenkins server. For now, make sure the NTFS permissions are set so "Everyone" can write to the folder:


Hit save in Jenkins. Return to the Dashboard by using the navigation bar on the right:


 

You can manually launch the job, or wait the 5 minutes and confirm the script worked.

To manually launch:


 

Now that we have finished the scheduled task, let's talk about Jenkins error handling for a second.

Jenkins Error Handling Gotchas

Go back to the NTFS permissions on the C:\temp folder and Deny Everyone and hit apply:


Wait for the scheduled task to run, or force it manually. After it runs, Jenkins still reports the task as a success:

Let's take a look at what the script output in the terminal. Click on the "Run Every 5 Minutes" link and look for "Last build" under Permalinks; click on "Last Build".


The terminal output shows the error:


The issue has to do with how Jenkins detects the script exit. A success is read as:

Exit 0

A failure:

Exit 1

This means anywhere you would return a True or False value to exit your script, such as:

try {
echo "Folder is blocked so I should return false" >> C:\temp\write_to_file.txt
return $true
} catch {
return $false
}

You would need to use an exit code.

Enter your code as:

Try {
echo "write a line" >> C:\temp\write_to_file.txt
exit 0
} catch {
exit 1
}

Leads to the expected failure message:

After reverting the c:\temp NTFS permissions to allow everyone:


 

Thanks for reading.

Project Honeypot API and Powershell

This week I decided to do some work with the Project Honey Pot http:BL API. I have been using it mostly as part of other scripts that gather information from logs. This API is rather interesting, as it is a DNS query rather than the normal URL/HTTP methods I have been working with.

“Project Honey Pot is a web-based honeypot network, which uses software embedded in web sites to collect information about IP addresses used when harvesting e-mail addresses for spam or other similar purposes such as bulk mailing and e-mail fraud. The project also solicits the donation of unused MX entries from domain owners.” -Wikipedia

I would suggest reading the Terms of Service if you plan on using this API for any production systems, or any kind of dynamic blocking:

The code can be found on Pastebin here

Or on my Github here

#https://www.projecthoneypot.org/faq.php
function Get-projecthoneypot() {
#https://www.projecthoneypot.org/terms_of_service_use.php
Param(
[Parameter(Mandatory = $true)][string]$ip,
[AllowEmptyString()]$api_key="<YOUR API KEY HERE>"
)
$ip_arr = $ip.split(".")
[array]::Reverse($ip_arr)
$ip = $ip_arr -join(".")
$query = $api_key+ "." + "$ip" + ".dnsbl.httpbl.org"
try {
$response = [System.Net.Dns]::GetHostAddresses("$query") | select -expandproperty IPAddressToString
} catch {
return $false
}
$decode = $response.split(".")
if($decode[0] -eq "127") {
$days_since_last_seen = $decode[1]
$threat_score = $decode[2]
switch ($decode[3]){
0 { $meaning = "Search Engine"}
1 { $meaning = "Suspicious"}
2 { $meaning = "Harvester"}
3 { $meaning = "Suspicious & Harvester"}
4 { $meaning = "Comment Spammer"}
5 { $meaning = "Suspicious & Comment Spammer"}
6 { $meaning = "Harvester & Comment Spammer"}
7 { $meaning = "Suspicious & Harvester & Comment Spammer"}
default {$meaning = "Unknown"}
}
$return_obj = [PSCustomObject] @{
last_seen = $days_since_last_seen
threat_score = $threat_score
meaning = $meaning
}
return $return_obj

} else {
return "Illegal response"
}

}

To break the function down a little into the interesting tidbits, there is the DNS query:

$ip_arr = $ip.split(".")
[array]::Reverse($ip_arr)
$ip = $ip_arr -join(".")
$query = $api_key+ "." + "$ip" + ".dnsbl.httpbl.org"
try {
$response = [System.Net.Dns]::GetHostAddresses("$query") | select -expandproperty IPAddressToString
} catch {
return $false
}

The API wants each bit of information broken down as a subdomain. First I split the IPv4 address into an array and reverse the order, so 192.168.0.1 becomes 1.0.168.192.
We then prepend the API key to the query, so if your API key is 12345abcdef your DNS query looks like 12345abcdef.1.0.168.192. Then, to hit the correct DNS servers, we append the zone: 12345abcdef.1.0.168.192.dnsbl.httpbl.org. The DNS server will then respond with an IP.

The response from the honeypot API DNS servers is in the form of an IP address, always starting with 127. That lets us confirm the query was valid, and then we move on to the next octet:

$decode = $response.split(".")
if($decode[0] -eq "127") {
$days_since_last_seen = $decode[1]
$threat_score = $decode[2]
switch ($decode[3]){
0 { $meaning = "Search Engine"}
1 { $meaning = "Suspicious"}
2 { $meaning = "Harvester"}
3 { $meaning = "Suspicious & Harvester"}
4 { $meaning = "Comment Spammer"}
5 { $meaning = "Suspicious & Comment Spammer"}
6 { $meaning = "Harvester & Comment Spammer"}
7 { $meaning = "Suspicious & Harvester & Comment Spammer"}
default {$meaning = "Unknown"}
}

The second octet is the number of days since a qualifying incident caused the IP to be logged.
The third octet is the threat score the system gives the IP, from 0 to 255, with 255 being the highest threat; the system qualifies threats by a variety of methods.
The final octet is the behavior the project spotted from the IP.
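Calling the function returns those three decoded fields as a single object; the IP below is a documentation address used purely for illustration:

$result = Get-projecthoneypot -ip "203.0.113.9"   #returns $false for addresses the project has never seen
$result.last_seen      #days since the address was last observed
$result.threat_score   #0-255, higher is worse
$result.meaning        #decoded visitor type, e.g. "Comment Spammer"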

That is all for this week. Thanks for reading.

Using Malwaredomains.com DNS Black hole list with Windows 2012 DNS and Powershell

Malwaredomains.com is a project that is constantly adding known malware domains to a giant list.
They have directions for adding their zone files to a Windows server, but they even describe it as a bit of a workaround, and they link to a PowerShell script that uses WMI. I hadn't worked with the Windows 2012 PowerShell DNS commands, so I threw together a quick little script to handle linking to the malwaredomains.com list using the native Windows 2012 cmdlets.

The script pulls and parses the full .txt file that www.malwaredomains.com keeps. Each primary zone is created as a non-Active Directory-integrated entry; this keeps it from flooding your Active Directory replication with entries. If you choose to use this script, I would recommend you place it only on the domain controllers that your users are likely to query for DNS. For example, if you have two Active Directory domain controllers handling an office's DNS and two domain controllers for FSMO roles and the datacenter, the script should run on the office domain controllers.

This script can be found on pastebin
And all three of these scripts can be found on my github.
Customize the top variables for your environment; the rest of the script should handle itself:

$tmp_file_holder = "current_list.bk"

$rollback_path = "C:\scripts\current_roll_back.list"
$rollback_date = get-date -format "M_dd_yyyy"
$rollback_backup_file = $rollback_path + "rollback_" + $rollback_date + ".bat"

move-item $rollback_path $rollback_backup_file -force

$domain_list = invoke-webrequest http://mirror1.malwaredomains.com/files/domains.txt | select -expandproperty content
$domain_list -replace "`t", ";" -replace ";;" > $tmp_file_holder
$domain_content = get-content $tmp_file_holder

$zone_list = get-dnsserverzone | where {$_.IsDsIntegrated -eq $false} | select -expandproperty Zonename

foreach($line in $domain_content){
if(-not($line | select-string "##")) {
$line_tmp = $line -split ";"
$line = $line_tmp[0]
if($zone_list -notcontains $line) {
Add-DnsServerPrimaryZone "$line" -DynamicUpdate "none" -ZoneFile "$line.dns"
#Record each added zone in the rollback list so the rollback script can remove it later
echo "$line" | Out-File -FilePath $rollback_path -Encoding ascii -append
sleep 1
}
}
}

The Malwaredomains.com team often removes sites from the list that were temporarily or wrongfully added, so I created a rollback script.
This script can be found on pastebin
And all three of these scripts can be found on my github.
Make sure to modify the top variable to fit your environment and match the original script:

$rollback_path = "C:\scripts\current_roll_back.list"

$domain_content = get-content $rollback_path
$zone_list = get-dnsserverzone | where {$_.IsDsIntegrated -eq $false} | select -expandproperty Zonename

foreach($line in $domain_content) {
if($zone_list -contains $line) {
Remove-DnsServerZone "$line"
sleep 1
}
}

I would suggest setting up two scheduled tasks: one to run weekly, adding new domains and keeping the list up to date, and a second to run the rollback, clearing out wrongly marked domains. How you choose to manage it is up to you, of course.
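On Server 2012 R2 you can register both jobs straight from PowerShell. A minimal sketch, with placeholder paths, times and task names:

#Weekly sync of new malwaredomains.com zones
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\scripts\add_malware_domains.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 3am
Register-ScheduledTask -TaskName "MalwareDomains Sync" -Action $action -Trigger $trigger -User "NT AUTHORITY\SYSTEM"

#Weekly rollback of entries removed from the list
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\scripts\rollback_malware_domains.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Saturday -At 3am
Register-ScheduledTask -TaskName "MalwareDomains Rollback" -Action $action -Trigger $trigger -User "NT AUTHORITY\SYSTEM"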

Since I was messing around with the files anyway, I also made a hosts file generator. Hosts files are not really a preferable method, as there have been reports of large hosts files slowing down browsing and the like; that said, I do use a version of this script on my personal computers and haven't seen an issue. Malwaredomains.com doesn't offer hosts files, but they do list some great ones, which limits the use of the script below. Still, I like adding my own hosts file entries to the top of the script and running it as a scheduled task once a week.
This can be found on pastebin
And all of the code can be found on my github

Change the variables at the top to fit your needs. I even made it easy to set up a reroute IP pointing to a branded warning page for businesses. I would point out that this doesn't protect against subdomains and the like.


$host_file_path = "C:\windows\system32\drivers\etc\hosts_tmp"
$final_loc = "C:\windows\system32\drivers\etc\hosts"
$tmp_file_holder = ".\current_list.bk"
$reroute = "127.0.0.1"

$Host_File_header = "# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host

# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost"

echo "$Host_File_header" > $host_file_path

$domain_list = invoke-webrequest http://mirror1.malwaredomains.com/files/domains.txt | select -expandproperty content
$domain_list -replace "`t", ";" -replace ";;" > $tmp_file_holder
$domain_content = get-content $tmp_file_holder
foreach($line in $domain_content){
if(-not($line | select-string "##")) {
$line_tmp = $line -split ";"
$line = $line_tmp[0]
echo "$reroute $line" >> $host_file_path
}
}

move-item $host_file_path $final_loc -force

All the code can be found on my github.com

Thanks for reading.

Using Powershell to notify when an email is involved in a data breach.

This week I worked with the Have I Been Pwned API. I came up with a pretty useful little script that monitors email addresses and notifies you if one of them turns up in a compromised service. Have I Been Pwned offers a notification service for individual accounts here, which is nice, but if you're at a business with hundreds of employees you don't want to be adding accounts manually, and sometimes you want to be emailed when someone's account shows up in a breach. That is where these scripts come in.

The scripts can be found:
Monitor script designed to work with AD can be found on pastebin here.
Monitor script using an array of emails can be found on pastebin here.
Additional functions made from this project can be found on pastebin here.
And as always My github has the full collection.

There are two versions of the monitor script: one with an array where you configure the email addresses manually, and one that pulls directly from Active Directory. A note: the monitor scripts do not care about the age of the breach. If haveibeenpwned.com gets information on a new breach that happened in 2001 and a user's email overlaps, the user will be notified. After a breach has been identified it is logged and the user isn't bothered again. The script also "stacks" breaches into one email so as not to spam your users with hundreds of emails.

An email for multiple breaches looks like:


This email is customizable in the configuration section of the script.

Other quick notes before I go into configuration details: I would suggest not running the script more than once a month, or once a week at the most. The breaches can be old at times, and constantly hammering the API will not do any good. There is also a sleep 5 in the script; feel free to adjust it, but I left it in to make sure larger environments wouldn't constantly query the API and cause issues.

Configuration options for these scripts:

#Make sure the path exists or you will spam your list every time the script runs:
$path_to_notified_file = ".\db\pwnd_list.csv"

This is the database file that keeps the script from spamming your users. Make sure it is correct and writable or your users will be notified repeatedly.

Do you even want to send an email? With $email_notify set to $false, the script just generates a CSV file. This lets you build a basic database of old breaches without annoying your users, or determine how many user emails have been involved in breaches.

#SMTP settings:
$email_notify = $true

Customize the Email alert the users will get:

$from = "test@example.com"
$subject = "ATTN: Account was included in a data breach"
$body_html = "Hello,
It has been noticed by an automated system that your email address was included in the following data breaches:"
$body_signature = "
It is recommended you change your passwords on those systems

Thank you
I_script_stuff Notifier Bot

#email credentials enable tested on gmail. If you don’t need credentials set $needs_email_creds to false.
$needs_email_creds = $false
#configure credential file for email password if needed:
$creds_path = ".\cred.txt"
#read-host -assecurestring | convertfrom-securestring | out-file $creds_path

The $needs_email_creds option requires you to set up a password if set to $true. This works with Gmail, but I haven't tested it on other systems.
First run the line that loads the $creds_path variable, then copy and paste the read-host line without the comment, like so:

The script doesn’t prompt for anything. Just type your password for the email account and press enter. The password will be stored in the file.
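For reference, a credential stored this way is normally read back into a PSCredential along these lines; this is the standard SecureString pattern, so check the script itself for the exact variable names it uses:

$password = Get-Content $creds_path | ConvertTo-SecureString
$email_credential = New-Object System.Management.Automation.PSCredential($from, $password)
#Send-MailMessage can then be called with -Credential $email_credential -UseSsl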

The last bit you need to configure is the SMTP server settings:

#SMTP server to use
$smtp = "smtp.gmail.com"
$smtp_port = "587"

This is configured for Google; you'll need to know your own SMTP server settings.

Now you're all set to monitor your corporate environment for breaches involving services your users may have signed up for with their email.

 

Additional functions made from this project can be found on pastebin here.


get-breachedstatus:


function get-breachedstatus() {
Param(
[Parameter(Mandatory = $true)][string]$email,
[AllowEmptyString()]$brief_report="$true"
)

try{
if($brief_report) {
$url = "https://haveibeenpwned.com/api/v2/breachedaccount/" + $email + "?truncateresponse=true"
} else {
$url = "https://haveibeenpwned.com/api/v2/breachedaccount/" + $email
}
$result = invoke-restmethod "$url" -UserAgent "I_script_stuff checker 0.01"
return $result
} catch {
return $false
}
}

This function is what powers the notification script. In the script it asks for a "truncated response", but you can get some more interesting information from a non-truncated response.
Command:
Get-breachedstatus test@example.com

get-pastestatus:
This searches the API for pastes involving the account and returns the dump. There is no truncated option; the response just is the response:

function get-pastestatus() {
Param(
[Parameter(Mandatory = $true)][string]$email
)
try{
$url = "https://haveibeenpwned.com/api/v2/pasteaccount/" + $email
$result = invoke-restmethod $url -UserAgent "I_script_stuff checker 0.01"
return $result
} catch {
return $false
}
}

get-allbreaches dumps the full database of breaches in case you want to cache it:

function get-allbreaches() {
try{
$url = "https://haveibeenpwned.com/api/v2/breaches"

$result = invoke-restmethod "$url" -UserAgent "I_script_stuff checker 0.01"
return $result
} catch {
return $false
}
}

get-domainstatus queries a specific domain for a breach:

function get-domainstatus() {
Param(
[Parameter(Mandatory = $true)][string]$domain
)
try{
$url = "https://haveibeenpwned.com/api/v2/breach/" + $domain
$result = invoke-restmethod $url -UserAgent "I_script_stuff checker 0.01"
return $result
} catch {
return $false
}
}

You can use these functions to get started on any larger projects. Please skim the API documentation for the fair-use rules, though mostly it boils down to "don't do evil".

That is it for this week.  Thanks.