Jenkins, Powershell, AWS and Cloudflare Automated Deployment (Part 3: Cloudflare)

If you’re interested in only the code, the export of the Jenkins template, and the list of needed plugins, they can be found on my Github. I have made some changes since Part 2 was written, so I would suggest updating if you’re using an old version of the script.

This is Part 3 of my series on deploying an EC2 instance and integrating Cloudflare using Powershell and Jenkins. You can find my previous posts here: Part 1 and Part 2.

In Part 2 I walked you through creating the Jenkins build and populating the parameters, and by the end you should have been able to launch a working EC2 instance. In this section I will cover injecting the environment variables into a second build step and using the Cloudflare API to create a non-proxied DNS entry. Non-proxied DNS entries act as standard DNS entries and were not affected by the Cloudbleed issue from a few weeks ago.

This post assumes you have a configured Cloudflare account for the domain you want to auto-deploy. You will also need the email address and API key for your Cloudflare account.

In your Jenkins instance, go to the build that was created in Part 2 for launching the EC2 instance. Scroll to the domain parameter and make sure you enter domains that match what is available in Cloudflare and that you want available for deployment.

Next we need to add a new build step. Select “Inject environment variables” and set the Properties File Path to “build.prop”. Make sure this build step comes after the Powershell EC2 launch step:

Connect to the console of your Jenkins server and go to the workspace, normally C:\Program Files\Jenkins\workspace\<build name>. To verify, go back to one of your previous test builds and find the workspace path in the console output.
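You don’t actually need the literal path inside a build step, since Jenkins exposes the workspace as an environment variable. A minimal illustration (the WORKSPACE value is simulated here because we are outside Jenkins):

```powershell
# Jenkins sets $env:WORKSPACE for every build step; simulated here with an illustrative value.
$env:WORKSPACE = "C:\Program Files\Jenkins\workspace\EC2-Deploy"

# build.prop lives directly inside the workspace:
$prop_path = Join-Path $env:WORKSPACE "build.prop"
Write-Output "build.prop path: $prop_path"
```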

Create a blank file called build.prop.


I go into more detail about the EnvInject plugin here.

Back in the Jenkins web control panel, create another Powershell build step and add the contents of build2_Cloudflare.ps1 to Jenkins.

At the top of the script configure your API information:

The $email and $api_key values come directly from Cloudflare.
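The configuration block at the top of build2_Cloudflare.ps1 then looks something like this (all values here are placeholders):

```powershell
# Cloudflare API configuration (placeholder values):
$email = "you@example.com"             # the email address registered with Cloudflare
$api_key = "<api key from Cloudflare>" # the API key from your Cloudflare profile
$overwriteip = $true                   # behavior on IP collisions, described below
```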

You can also decide on the default behavior for IP collisions within Cloudflare: if a subdomain already exists, this setting decides whether the script should overwrite the IP. If $overwriteip is set to $true, the script attempts to update the record and, if that fails, tries to create it. If it is set to $false, only the create is attempted. If the record cannot be created, the Elastic IP assigned to the new instance is printed in the console output, so the user can still reach it.
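The collision handling described above can be sketched roughly as follows; Update-CFDnsRecord and Create-CFDnsRecord are hypothetical stand-ins for the script’s Cloudflare calls, hard-coded here to simulate a record that does not exist yet:

```powershell
# Hypothetical stand-ins for the script's Cloudflare functions (illustrative only):
function Update-CFDnsRecord($fqdn, $ip) { return $false }  # simulate: record not found
function Create-CFDnsRecord($fqdn, $ip) { return $true }   # simulate: create succeeds

$overwriteip = $true
$domain_FQDN = "webdeploy.example.com"
$EIP = "203.0.113.10"

if ($overwriteip) {
    # Try to update an existing record first, then fall back to creating one.
    $result = Update-CFDnsRecord $domain_FQDN $EIP
    if (-not $result) { $result = Create-CFDnsRecord $domain_FQDN $EIP }
} else {
    # Never clobber an existing record; only attempt a create.
    $result = Create-CFDnsRecord $domain_FQDN $EIP
}
if (-not $result) { Write-Output "DNS entry not created. Elastic IP: $EIP" }
```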

No other configuration is needed for the Cloudflare segment. The flow simply grabs the domain and the AWS instance name and uses them to generate the FQDN. The Elastic IP is injected into the file by the first build step like so:

$PublicIP = $ellastic_ip_allocation | select -expandproperty PublicIP
echo "Passing Env variable $PublicIP"
"ElasticIP = $PublicIP" | Out-File build.prop -Encoding ASCII -Force

The second build step picks up the injected variable like any other environment variable:

cd $env:WORKSPACE
$EIP = $env:ElasticIP
$domain_partial = $ENV:Domain
$domain_Instance_Name = $ENV:Instance_Name
$domain_FQDN = $domain_Instance_Name + "." + $domain_partial
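With illustrative values plugged in, the concatenation above produces:

```powershell
# Example values standing in for the injected Jenkins parameters:
$domain_partial = "example.com"
$domain_Instance_Name = "webdeploy"
$domain_FQDN = $domain_Instance_Name + "." + $domain_partial
$domain_FQDN   # → webdeploy.example.com
```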

I go into a lot more detail on the Cloudflare script and its functions in this post here.

I hope this blog series provided inspiration for building out a DevOps-friendly deployment: something that gives the development team the freedom to deploy servers while keeping the consistent and secure environment every operations team needs to thrive.

Thanks for reading,
I_Script_Stuff

Jenkins, Powershell, AWS and Cloudflare Automated Deployment.

If you’re interested in only the code, the export of the Jenkins template and the list of needed plugins can be found on my Github.

Introduction:

This multipart series is about using Jenkins to spin up an EC2 instance, add an Elastic IP, and deploy the DNS to Cloudflare using Powershell. It is meant to be a template configuration for Windows DevOps teams to start building out environments. From here you can build your own AMI, tie in your own github/svn code deploy, or use ansible/chef/puppet/powershell/a million other options to complete an automated deployment.

Some advantages to this method:

The user doing the build will be able to pick an instance, pick a security group, and deploy into multiple regions, while Jenkins 100% controls the options available. This ensures consistency across builds and lets you limit who requires credentials to critical infrastructure. We can also apply tags and enforce mandatory tagging. This method also utilizes the concept of “only right answers”: though the user is given options, those options are limited within Jenkins, and those limitations make sense for the project. This template will let you have confidence that tomorrow morning you will not wake up to a dozen d2.8xlarge instances with 5 TB of storage each acting as a cluster of DNS servers.

I have written posts about many of the tools used in this series (AWS Report using Powershell, Managing Cloudflare with Active Directory, Jenkins EnvInject Plugin, Migrating Powershell Scheduled Tasks to Jenkins), so if you want other posts that cover the getting-started concepts I’d recommend those. I will note that Matthew Hodgkins wrote a really great two-part blog entry on getting started with Jenkins and Powershell, and the AWS plugin getting-started docs are a great resource if you are trying to do this project from scratch.

Plugins and Configuration:

For this project to work there are a few plugins you will absolutely need:

Jenkins (latest version)

Powershell 4.0

Aws Powershell plugin: used to integrate the Powershell script and AWS.

Environment Injector Plugin: used to keep some variables between Jenkins build steps.

User build vars plugin: used to get data about who triggered the Jenkins build.

Role-based Authorization Strategy (optional): this can be used to limit which builds a user has access to. I’ll cover it in a later post.


Additional configuration notes:

I have Jenkins running as a service account. I did this because I wanted to run Powershell plugins, and a few times I have run into odd behavior without a full user profile available. I haven’t tested whether this works without it.

Workflow:

The user has connected to Jenkins and found the build they want.

When initializing the build they encounter configuration options:

They click Build and Jenkins launches the job. The user waits a few minutes, and in the AWS console a new EC2 instance has spawned with tags:

The Name tag has been set to “WebDeploy”, and two new tags have been created. BuildTag shows the Jenkins job that launched the instance; since AWS allows 256 Unicode characters and Jenkins 200+, I was able to name the build after the environment it was launched in (Prod or production) and a billing code (Billcode0001), and the trailing -2 is which Jenkins build the deploy came from. The other tag is BuiltBy, the Jenkins user account that triggered the build.

After the AWS instance is fully online, Powershell makes API calls to Cloudflare. A new entry is created, or an old entry is updated. WebDeploy is given the FQDN webdeploy.electric-horizons.com:

Under the Hood:


#Import AWS powershell module
import-module awspowershell

#Enforce working in our current Jenkins workspace.
cd $env:WORKSPACE

#
#ENV:AWS_Profile is from the build parameters earlier it provides the AWS profile credentials
#

$aws_profile = $ENV:AWS_Profile
Set-AWSCredentials -ProfileName $aws_profile

#Load all the other environment variables AWS needs to create an instance

$region = $ENV:Region
$instance_name = $ENV:Instance_Name
$builder = $ENV:BUILD_USER
$buildtag = $ENV:BUILD_TAG
$image_type = $ENV:image_type
$Instance_Type = $ENV:Instance_Type
$domain = $ENV:Domain
$public_key = $ENV:Public_Key
$SecurityGroup = $ENV:Security_Group

#Search for the Security group Name tag Value. More on this in the next post.

try {
$SecurityGroup_Id = Get-EC2SecurityGroup -Region "$region" | where { $_.Groupname -eq "$SecurityGroup" } | select -expandproperty GroupId
echo "Security group Identification response:"
$SecurityGroup_Id

} catch {
$_
exit 1
}

#Make sure that the Instance name is not blank.

if($instance_name.length -le 1) {
echo "ERROR: Instance must be named and the length must be greater than 1."
echo "ERROR: Instance name: $instance_name"
echo "ERROR: Instance name length" $instance_name.length
exit 1
}

#Select the AWS AMI. The search is limited to images owned by Amazon or this account, taking the first (most recent) match.

try {
$image_id = Get-EC2Image -Owner amazon, self -Region $region | where { $_.Description -eq $image_type } | select -first 1 -expandproperty ImageId
echo "EC2 Image ID Response:"
$image_id
} catch {
$_
exit 1
}

#Generate the instance, with all environmental variables provided from Jenkins build.

try {
$instance_info = New-EC2Instance -ImageId $image_id -MinCount 1 -MaxCount 1 -KeyName $public_key -SecurityGroupId $SecurityGroup_Id -InstanceType $instance_type -Region $region
echo "Image generation response"
$instance_info
} catch {
$_
exit 1
}

#Let the user know things are working as intended while we wait for the instance to reach the running state.

echo "Please wait for image to fully generate"
while($(Get-Ec2instance -instanceid $instance_info.instances.instanceid -region $region).Instances.State.Name.value -ne "running") {
sleep 1
}

#Apply tags to the instance

echo "Naming Instance"
$tag = New-Object Amazon.EC2.Model.Tag
$tag.Key = "Name"
$tag.Value = "$instance_name"

New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region
echo "Tagging build information"
$tag.Key = "BuiltBy"
$tag.Value = "$builder"
New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region

$tag.Key = "BuildTag"
$tag.Value = "$buildtag"

New-EC2Tag -Resource $instance_info.instances.instanceid -Tag $tag -Region $region

#Attach an elastic IP to the instance

try {
$ellastic_ip_allocation = New-EC2Address -Region $region
echo "Elastic IP registered:"
$ellastic_ip_allocation
} catch {
echo "ERROR: Registering Ec2Address"
$_
exit 1
}

#Assign the elastic IP to the instance

try {
$response = Register-Ec2Address -instanceid $instance_info.instances.instanceid -AllocationID $ellastic_ip_allocation.allocationid -Region $region
echo "Register EC2Address Response:"
$response
} catch {
echo "ERROR: Associating EC2Address:"
$_
exit 1
}

#Send the elastic IP value to the EnvInj plugin:

$PublicIP = $ellastic_ip_allocation | select -expandproperty PublicIP
echo "Passing Env variable $PublicIP"
"ElasticIP = $PublicIP" | Out-File build.prop -Encoding ASCII -Force

exit 0

In the next post I’ll cover prepping the AWS environment, prepping the Jenkins build, and the key places to customize for your environment.

If you want the whole project you can get it from my Github.

Part Two can be found here

Thanks for reading,

I_script_stuff

A script to manage CloudFlare’s DNS within Active Directory DNS

Using the Cloudflare API and Powershell it is possible to keep a Windows 2012 R2 DNS server the master of a Cloudflare DNS zone. This works with all pricing tiers of Cloudflare. I have seen a few solutions for keeping Cloudflare synced with AD, but none were based in Powershell, and several are only hosted on SourceForge, which has recently gained a dubious reputation (though I understand that is being cleaned up). I’m sure many people can agree that being able to see and read the source code is the better option. So this is what I put together.

This script is designed to run on a domain controller/DNS server, by default through a basic Task Scheduler configuration.

You can get a copy of both scripts offered here from my Github or Pastebin.

A quick break down of the script:

Global configs:

Authentication/To-do list of domains:

$domain_to_sync_list = ("electric-horizons.com", "Example.com")

$email = "ReallyRealEmail@gmail.com"

$api_key = "THEAPIKEYIGOTFROMCLOUDFLARE!@"

Cloudflare’s API is pretty well documented; you can find your API key here, and the email is simply the one you registered with.

One last value is considered global.

The “strict” value:

$strict = $false

When this value is set to $true, the script will delete any Cloudflare DNS entries that do not match those found in Active Directory DNS.

The script uses five functions and a foreach loop. A brief explanation of each function:

get-cfzoneid:

This function identifies the zone ID number Cloudflare has assigned. Running it without arguments outputs the ID of every zone attached to your CloudFlare account.

Create-cfdns:

Given an FQDN, a type (CNAME, A, etc.), an IP address/domain, and optionally the ID generated by get-cfzoneid, this function generates a new DNS entry in CloudFlare.

Update-cfdns:

Given an FQDN, an IP address, and optionally the zone ID, this updates an existing Cloudflare DNS entry.

Delete-cFdns:

Given an FQDN and optionally the zone ID, this command deletes an existing Cloudflare DNS entry without further prompting. It outputs what was deleted to the console.

Get-cfdnslist:

This command outputs all DNS entries stored within Cloudflare when provided with the mandatory zone ID.

The foreach loop(s):

The foreach loop is broken up into two main sections, with several steps discussed below:

The primary section enumerates the Active Directory zone based on $domain_to_sync_list. It only grabs the CNAME and A entries. You can add MX and any other entries you like simply by adding them to the Where-Object check:

foreach($domain in $domain_to_sync_list) {
$dns_list = Get-DnsServerResourceRecord -ZoneName "$domain" | where {($_.RecordType -eq "A") -or ($_.RecordType -eq "CNAME")}

CloudFlare is very generous in allowing 1200 calls per 5 minutes, about 4 calls a second. I didn’t see a reason to abuse that generosity, so I load the zone ID for the domain at the beginning of the loop, even though update-cfdns and create-cfdns will look it up if not provided a value. The script isn’t particularly fast; in some instances you may need to add a sleep to the loop if the zone is large:

$id = get-cfzoneid $domain

A second foreach loop handles the data we gathered from the earlier AD query. It uses a switch statement to select the procedure based on DNS entry type. Due to how Powershell and the DNS cmdlets return the objects, I used different calls for each type. This is where you would add handling for MX, TXT, or SPF records. I made them easy to add, but they will need to be tested in your own environment.
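As a sketch of what extending the filter looks like, here is the Where-Object check widened to include MX, run against stand-in objects (in the real script the input comes from Get-DnsServerResourceRecord):

```powershell
# The record types to sync; add "MX", "TXT", etc. here as needed.
$types = @("A", "CNAME", "MX")

# Stand-in records; the real script gets these from Get-DnsServerResourceRecord.
$sample = @(
    [pscustomobject]@{ RecordType = "A" },
    [pscustomobject]@{ RecordType = "MX" },
    [pscustomobject]@{ RecordType = "TXT" }
)
$dns_list = $sample | Where-Object { $types -contains $_.RecordType }
# $dns_list now holds only the A and MX records.
```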


Outside the second foreach loop we check the strict variable. If it is true, we remove any entries that do not match:

if($strict) {
$cf_dns_list = get-cfdnslist $id
foreach($entry in $cf_dns_list.result.name) {
$replace_string = "." + $domain
$verify = $entry -Replace $replace_string
if($dns_list.Hostname -notcontains "$verify") {
delete-CFdns $entry $id
}
}
}
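The -Replace in the loop above strips the zone suffix so the Cloudflare name can be compared with bare AD host names. Note that -replace treats its pattern as a regex (the dots match any character), but for this purpose it works out; a quick illustration:

```powershell
$domain = "electric-horizons.com"
$entry = "www.electric-horizons.com"    # a full name as returned by Cloudflare
$replace_string = "." + $domain
# -replace with no second argument deletes the matched text:
$verify = $entry -Replace $replace_string
$verify   # → www
```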

This, plus a few closing brackets, completes the script.

Thanks for reading.

As a bonus:

Dynamic dns script

Those same five functions can be used to create your own free dynamic DNS service. You won’t need Active Directory DNS for this. By using another service such as api.ipify.org to get your external IP, you can point a DNS name at your house. The script can be found on my Github here, or on Pastebin here.

Modifying these variables at the top is all that is needed:

The list of FQDNs you want to update with your dynamic entry:

$domain_list = ("fqdn for dynamic update")

And the email and API key described earlier:

$email = "<email from cloudflare>"
$api_key = "<api key from cloudflare>"

Thanks for reading,
I-Script-Stuff

Project Honeypot API and Powershell

This week I decided to do some work with the Project Honey Pot http:BL API. I have been using it mostly as part of other scripts that gather information from logs. This API is rather interesting, as it is a DNS query rather than the normal URL/HTTP methods I have been working with.

“Project Honey Pot is a web-based honeypot network, which uses software embedded in web sites to collect information about IP addresses used when harvesting e-mail addresses for spam or other similar purposes such as bulk mailing and e-mail fraud. The project also solicits the donation of unused MX entries from domain owners.” -Wikipedia

I would suggest reading the Terms of Service if you plan on using this API for any production systems or any kind of dynamic blocking:

The code can be found on Pastebin here

Or on my Github here

#https://www.projecthoneypot.org/faq.php
function Get-projecthoneypot() {
#https://www.projecthoneypot.org/terms_of_service_use.php
Param(
[Parameter(Mandatory = $true)][string]$ip,
[AllowEmptyString()]$api_key="<YOUR API KEY HERE>"
)
$ip_arr = $ip.split(".")
[array]::Reverse($ip_arr)
$ip = $ip_arr -join(".")
$query = $api_key+ "." + "$ip" + ".dnsbl.httpbl.org"
try {
$response = [System.Net.Dns]::GetHostAddresses("$query") | select -expandproperty IPAddressToString
} catch {
return $false
}
$decode = $response.split(".")
if($decode[0] -eq "127") {
$days_since_last_seen = $decode[1]
$threat_score = $decode[2]
switch ($decode[3]){
0 { $meaning = "Search Engine"}
1 { $meaning = "Suspicious"}
2 { $meaning = "Harvester"}
3 { $meaning = "Suspicious & Harvester"}
4 { $meaning = "Comment Spammer"}
5 { $meaning = "Suspicious & Comment Spammer"}
6 { $meaning = "Harvester & Comment Spammer"}
7 { $meaning = "Suspicious & Harvester & Comment Spammer"}
default {$meaning = "Unknown"}
}
$return_obj = [PSCustomObject] @{
last_seen = $days_since_last_seen
threat_score = $threat_score
meaning = $meaning
}
return $return_obj

} else {
return "Illegal response"
}

}

To break down the function a bit, the interesting tidbits start with the DNS query:

$ip_arr = $ip.split(".")
[array]::Reverse($ip_arr)
$ip = $ip_arr -join(".")
$query = $api_key+ "." + "$ip" + ".dnsbl.httpbl.org"
try {
$response = [System.Net.Dns]::GetHostAddresses("$query") | select -expandproperty IPAddressToString
} catch {
return $false
}

The API wants each piece of information broken down as a subdomain. First I split the IPv4 address into an array and reverse the order, so 192.168.0.1 becomes 1.0.168.192.
We then prepend the API key to the query: if your API key is 12345abcdef, your DNS query looks like 12345abcdef.1.0.168.192. Then, to hit the correct DNS servers, we append the FQDN, giving 12345abcdef.1.0.168.192.dnsbl.httpbl.org. The DNS server will then respond with an IP.

The response from the honeypot API DNS servers is in the form of an IP address. Since it always starts with 127, we can confirm the query is correct and then move on to the next octet:

$decode = $response.split(".")
if($decode[0] -eq "127") {
$days_since_last_seen = $decode[1]
$threat_score = $decode[2]
switch ($decode[3]){
0 { $meaning = "Search Engine"}
1 { $meaning = "Suspicious"}
2 { $meaning = "Harvester"}
3 { $meaning = "Suspicious & Harvester"}
4 { $meaning = "Comment Spammer"}
5 { $meaning = "Suspicious & Comment Spammer"}
6 { $meaning = "Harvester & Comment Spammer"}
7 { $meaning = "Suspicious & Harvester & Comment Spammer"}
default {$meaning = "Unknown"}
}

The second octet is the number of days since a qualifying incident caused the IP to be logged.
The third octet is the threat score the system gives the IP, from 0 to 255, with 255 being the highest threat. The system qualifies threats by a variety of methods.
The final octet is the behavior the project spotted from the IP.
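A worked example of the decoding, using a hypothetical response of 127.5.45.4:

```powershell
# Hypothetical http:BL answer; the leading 127 confirms a valid query.
$decode = "127.5.45.4".Split(".")
$days_since_last_seen = [int]$decode[1]   # 5  -> last activity 5 days ago
$threat_score = [int]$decode[2]           # 45 -> threat score out of 255
$visitor_type = [int]$decode[3]           # 4  -> "Comment Spammer" in the switch above
```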

That is all for this week. Thanks for reading.

Using Malwaredomains.com DNS Black hole list with Windows 2012 DNS and Powershell

Malwaredomains.com is a project that is constantly adding known malware domains to a giant list.
They have directions for adding their zone files to a Windows server, but even they describe it as a bit of a workaround, and they link to a Powershell script that uses WMI. Well, I hadn’t worked with the Windows 2012 Powershell DNS commands, so I threw together a quick little script to link to the malwaredomains.com list using the native Windows 2012 cmdlets.

The script pulls and parses the full .txt file that www.malwaredomains.com keeps. The primary zone is created as a non-Active-Directory-integrated entry, which keeps it from flooding Active Directory replication with entries. If you choose to use this script, I would recommend placing it only on the domain controllers your users are likely to query for DNS. For example, if you have two domain controllers handling an office’s DNS and two holding FSMO roles and serving the datacenter, the script should run on the office domain controllers.

This script can be found on pastebin
And all three of these scripts can be found on my github.
Customize the top variables for your environment; the rest of the script should be self-handling:

$tmp_file_holder = "current_list.bk"

$rollback_path = "C:\scripts\current_roll_back.list"
$rollback_date = get-date -format "M_dd_yyyy"
$rollback_backup_file = $rollback_path + "rollback_" + $rollback_date + ".bat"

move-item $rollback_path $rollback_backup_file -force

$domain_list = invoke-webrequest http://mirror1.malwaredomains.com/files/domains.txt | select -expandproperty content
$domain_list -replace "`t", ";" -replace ";;" > $tmp_file_holder
$domain_content = get-content $tmp_file_holder

$zone_list = get-dnsserverzone | where {$_.IsDsIntegrated -eq $false} | select -expandproperty Zonename

foreach($line in $domain_content){
if(-not($line | select-string "##")) {
$line_tmp = $line -split ";"
$line = $line_tmp[0]
if($zone_list -notcontains $line) {
Add-DnsServerPrimaryZone "$line" -DynamicUpdate "none" -ZoneFile "$line.dns"
echo "$line" | Out-File -FilePath $rollback_path -Encoding ascii -append
sleep 1
}
}
}
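To see what the tab-to-semicolon conversion and split actually yield, here is the transformation applied to a single made-up line in the domains.txt format (the real file’s columns may differ):

```powershell
# A made-up tab-delimited line: leading tabs, then domain and metadata columns.
$line = "`t`tbaddomain.example`tmalwaredomain`tsomesource"

# Tabs become ";" and the leading ";;" is deleted (-replace with no substitution):
$converted = $line -replace "`t", ";" -replace ";;"
$fields = $converted -split ";"
$zone = $fields[0]   # → baddomain.example
```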

The Malwaredomains.com team often removes sites from the list that were temporarily or wrongfully added, so I created a rollback script.
This script can be found on pastebin
And all three of these scripts can be found on my github.
Make sure to modify the top variable to fit your environment and match the original script:

$rollback_path = "C:\scripts\current_roll_back.list"

$domain_content = get-content $rollback_path
$zone_list = get-dnsserverzone | where {$_.IsDsIntegrated -eq $false} | select -expandproperty Zonename

foreach($line in $domain_content) {
if($zone_list -contains $line) {
Remove-DnsServerZone "$line"
sleep 1
}
}

I would suggest setting up two scheduled tasks:
one to run weekly, adding new domains and keeping the list up to date,
and a second to run the rollback, clearing out wrongly marked domains. Though how you choose to manage it is up to you.
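For reference, a configuration sketch of registering the weekly sync with the ScheduledTasks module (Windows Server 2012+); the script path and schedule are placeholders, and the rollback task would be registered the same way:

```powershell
# Placeholder path and schedule; adjust for your environment.
$action = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Bypass -File C:\scripts\malwaredomains_sync.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 3am
Register-ScheduledTask -TaskName "MalwareDomains Sync" -Action $action -Trigger $trigger
```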

Since I was messing around with the files anyway, I also made a hosts file generator. Hosts files are not really a preferable method, as there have been reports of large hosts files slowing down browsing and the like; that said, I use a version of this script on my personal computers and haven’t seen an issue. Malwaredomains.com doesn’t offer hosts files, but they link to a list of some great ones, which limits the use of the script below. Still, I like adding my own hosts file entries to the top of the script and running it as a scheduled task once a week.
This can be found on pastebin
And all of the code can be found on my github

Change the variables at the top to fit your needs. I even made it easy to set up a reroute IP pointing to a branded warning page for businesses. I would point out that this doesn’t protect against subdomains and the like.


$host_file_path = "C:\windows\system32\drivers\etc\hosts_tmp"
$final_loc = "C:\windows\system32\drivers\etc\hosts"
$tmp_file_holder = ".\current_list.bk"
$reroute = "127.0.0.1"

$Host_File_header = "# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host

# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost"

echo "$Host_File_header" > $host_file_path

$domain_list = invoke-webrequest http://mirror1.malwaredomains.com/files/domains.txt | select -expandproperty content
$domain_list -replace "`t", ";" -replace ";;" > $tmp_file_holder
$domain_content = get-content $tmp_file_holder
foreach($line in $domain_content){
if(-not($line | select-string "##")) {
$line_tmp = $line -split ";"
$line = $line_tmp[0]
echo "$reroute $line" >> $host_file_path
}
}

move-item $host_file_path $final_loc -force

All the code can be found on my github.com

Thanks for reading.

Using Powershell to notify when an email is involved in a data breach.

This week I worked with the Have I Been Pwned API. I came up with a pretty useful little script that monitors email addresses and notifies you if one of them is signed up for a compromised service. Have I Been Pwned offers a service for this here, which is nice for individual accounts, but if you’re at a business with hundreds of employees you don’t want to add accounts manually, and sometimes you want to be emailed when someone’s account shows up in a breach. That is where these scripts come in.

The scripts can be found:
Monitor script designed to work with AD can be found on pastebin here.
Monitor script using an array of emails can be found on pastebin here.
Additional functions made from this project can be found on pastebin here.
And as always, my Github has the full collection.

There are two versions of the monitor script: one with an array where you configure the email addresses manually, and one that pulls directly from Active Directory. A note: the monitor scripts do not care about the age of the breach. If haveibeenpwned.com gets information on a new breach that happened in 2001 and a user’s email overlaps, the user will be notified. After a breach has been identified it is logged, and the user isn’t bothered again. The script also “stacks” breaches into one email so as not to spam your users with hundreds of emails.

An email for multiple breaches looks like:


This email is customizable in the configuration section of the script.

Other quick notes before I go into configuration details: I would suggest not running the script more than once a month, or once a week at the most. The breaches can be old at times, and constantly hammering the API will not do any good. There is also a sleep 5 in the script; feel free to adjust it. I left it in to make sure larger accounts wouldn’t constantly query the API and cause issues.

Configuration options for these scripts:

#Make sure the path exists or you will spam your list every time the script runs:
$path_to_notified_file = ".\db\pwnd_list.csv"

This is the database file that keeps the script from spamming your users. Make sure the path is correct and writable, or your users will be notified repeatedly.

Do you even want to send an email? With $email_notify set to $false, the script just generates a CSV file. This lets you build a basic database of old breaches without annoying your users, or determine how many user emails have been involved in breaches.

#SMTP settings:
$email_notify = $true

Customize the Email alert the users will get:

$from = "test@example.com"
$subject = "ATTN: Account was included in a data breach"
$body_html = "Hello,
It has been noticed by an automated system that your email address was included in the following data breaches:"
$body_signature = "
It is recommended you change your passwords on those systems.

Thank you,
I_script_stuff Notifier Bot"

#email credentials, tested on Gmail. If you don't need credentials, set $needs_email_creds to $false.
$needs_email_creds = $false
#configure credential file for email password if needed:
$creds_path = ".\cred.txt"
#read-host -assecurestring | convertfrom-securestring | out-file $creds_path

The $needs_email_creds option requires you to set up a password file if set to $true. This works on Gmail, but I haven’t tested it on other systems.
First load the $creds_path variable, then copy and paste the read-host line without the comment, like so:

The script doesn’t prompt for anything; just type the password for the email account and press Enter. It will be stored in the file.

Last bit you need to configure is SMTP server settings:

#SMTP server to use
$smtp = "smtp.gmail.com"
$smtp_port = "587"

Configured for Google; you’ll need to know your own SMTP server settings.
Now you’re all set to monitor your corporate environment for breaches involving services your users may have signed up for with their email.


Additional functions made from this project can be found on pastebin here.


get-breachedstatus:


function get-breachedstatus() {
Param(
[Parameter(Mandatory = $true)][string]$email,
[AllowEmptyString()]$brief_report=$true
)

try{
if($brief_report) {
$url = "https://haveibeenpwned.com/api/v2/breachedaccount/" + $email + "?truncateresponse=true"
} else {
$url = "https://haveibeenpwned.com/api/v2/breachedaccount/" + $email
}
$result = invoke-restmethod "$url" -UserAgent "I_script_stuff checker 0.01"
return $result
} catch {
return $false
}
}

This function is what powers the notifier script. The script uses a “truncated response”; you can get some interesting extra information from a non-truncated response:
Command:
Get-breachedstatus test@example.com

get-pastestatus:
This searches the API for paste dumps containing the email. There is no truncated option; you simply get the full response:

function get-pastestatus() {
Param(
[Parameter(Mandatory = $true)][string]$email
)
try{
$url = "https://haveibeenpwned.com/api/v2/pasteaccount/" + $email
$result = invoke-restmethod $url -UserAgent "I_script_stuff checker 0.01"
return $result
} catch {
return $false
}
}

get-allbreaches dumps the full database of breaches, in case you want to cache them:

function get-allbreaches() {
try{
$url = "https://haveibeenpwned.com/api/v2/breaches"

$result = invoke-restmethod "$url" -UserAgent "I_script_stuff checker 0.01"
return $result
} catch {
return $false
}
}

get-domainstatus queries a specific domain for a breach:

function get-domainstatus() {
Param(
[Parameter(Mandatory = $true)][string]$domain
)
try{
$url = "https://haveibeenpwned.com/api/v2/breach/" + $domain
$result = invoke-restmethod $url -UserAgent "I_script_stuff checker 0.01"
return $result
} catch {
return $false
}
}

You can use these functions to get started on larger projects. Please skim the API documentation for the fair-use rules, though mostly it is “don’t do evil.”

That is it for this week.  Thanks.

Google SafeSearch API, Google Locations, and more.

A short post this week. I have been messing with the Google APIs and Powershell. The Google APIs tend to follow a pattern, so if nothing else I hope these functions work as a solid example. That said, I like to make sure the examples are at least a little useful on their own.

My favorite is the Safe Browsing API. Google constantly monitors sites for malware, social engineering, and other attacks; this is the same system that warns Google users before they click a flagged link in a search. One use for the function would be to watch for domains you own being flagged, either correctly or incorrectly. I was also kicking around the idea of an Active Directory DNS log parser that caches queries and compares them against the Safe Browsing database. That might catch users engaging in unsafe browsing in a corporate environment.

You can find the function on Pastebin here, or all the scripts on my GitHub.

Safe browsing function:
#Always fails safe search: malware.testing.google.test/testing/malware/
function get-BrowseSafe() {
Param(
[Parameter(Mandatory = $true)][string]$search,
[AllowEmptyString()]$api_key=""
)
#build the json object to send to google:
$json = '{
"client": {
"clientId": "BrowseSafe_monitor",
"clientVersion": "1"
},
"threatInfo": {
"threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
"platformTypes": ["ANY_PLATFORM"],
"threatEntryTypes": ["URL"],
"threatEntries": ['

#loop through multiple semicolon-delimited urls
$search_arr = $search -split ";"
$count_max = $search_arr.count
$count = 1
foreach($item in $search_arr) {
if($count -eq $count_max) {
$json = $json + "{""url"": ""$item""}"
} else {
$json = $json + "{""url"": ""$item""},"
}
$count++
}
#close json
$json = $json + ']}}'

#build url with correct api_key
$url = "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + $api_key
try {
$result = Invoke-RestMethod "$url" -Method POST -Body $json -ContentType 'application/json'
$result = $result.matches
} catch {
echo $_
}

if($($result.count) -eq 0) {
return $false
} else {
return $result
}
}
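The string-building loop is the fiddly part, so here it is in isolation. `build-threatentries` is a helper name I made up for this example; the key trick is that PowerShell escapes a double quote inside a double-quoted string by doubling it (`""`):

```powershell
# Build the threatEntries JSON array for a semicolon-delimited URL list,
# mirroring the loop inside get-BrowseSafe
function build-threatentries([string]$search) {
    $json = '['
    $search_arr = $search -split ";"
    $count_max = $search_arr.Count
    $count = 1
    foreach ($item in $search_arr) {
        if ($count -eq $count_max) {
            $json = $json + "{""url"": ""$item""}"   # last entry: no trailing comma
        } else {
            $json = $json + "{""url"": ""$item""},"
        }
        $count++
    }
    return $json + ']'
}

build-threatentries "example.com;example.org"
# → [{"url": "example.com"},{"url": "example.org"}]
```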

Search-custom uses the Google Custom Search API, which allows you to set up your own set of sites and query them through Google's search database. You have to build a list of sites and base the searches off of those. Each search list has its own ID that can be updated, so the function provided can be used as more of a template for search lists you come up with. Be sure to get the customsearch_id provided by Google after you configure your project's search list.

You can find the function on Pastebin here, or all the scripts on my GitHub.

function search-custom() {
Param(
[Parameter(Mandatory = $true)][string]$search,
[AllowEmptyString()] $customsearch_id = "<provide your own search id with google>",
[AllowEmptyString()]$api_key="<your key>"
)

try {
$Search_results = invoke-restmethod "https://www.googleapis.com/customsearch/v1?q=$search&cr=us&cx=$customsearch_id&key=$api_key"
$search_results = ($search_results.items) | select title, snippet,link
return $search_results
} catch {
return $false
}
}

Search-nearby utilizes the Google Places API. This is what recommends restaurants and the like near a location. I wanted this in my PowerShell profile, so I ended up writing a quick wrapper.

You can find the function on Pastebin here, or all the scripts on my GitHub.

function search-nearby() {
Param(
[Parameter(Mandatory = $true)][string]$search,
[AllowEmptyString()]$api_key="<your key>"
)

try {
$Search_results = invoke-restmethod "https://maps.googleapis.com/maps/api/place/textsearch/json?query=$search&key=$api_key"
$search_results = ($search_results.results) | select Name,types,Formatted_address,price_level,Rating
return $search_results
} catch {
return $false
}

}

The other two functions for this post are YouTube-based. It is worth checking out the YouTube API documentation. I found the YouTube API the most confusing of the bunch, and I still haven't spent the time to figure out how to post videos. At least these will get you started exploring YouTube.

You can find the function on Pastebin here, or all the scripts on my GitHub.


function Get-youtubesearch() {
Param(
[Parameter(Mandatory = $true)][string]$search,
[AllowEmptyString()]$max_page = 5,
[AllowEmptyString()]$copyright = "any",
[AllowEmptyString()]$youtube_key=""
)

$Search_results = invoke-restmethod "https://www.googleapis.com/youtube/v3/search?part=snippet&q=$search&type=video&videoLicense=$copyright&key=$youtube_key"
$page_count = 1
$video_list = @()

while(($Search_results.nextPageToken) -and ($page_count -le $max_page)) {
$next_page=$Search_results.nextPageToken

foreach($video_info in $search_results.items) {
$video_id = $video_info.id.videoid
$video_stats = invoke-restmethod "https://www.googleapis.com/youtube/v3/videos?part=statistics&id=$video_id&key=$youtube_key"
[int]$views = $video_stats.items.statistics.viewcount
[int]$likes = $video_stats.items.statistics.likecount
[int]$dislikes = $video_stats.items.statistics.dislikeCount
$title = $video_info.snippet.title
$link = "https://youtube.com/watch?v=$video_id"

$video_list += new-object psobject -Property @{
title = "$title";
video_id = "$video_id";
likes = $likes;
dislikes = $dislikes;
views = $views;
link = "$link";
}

}

$Search_results = invoke-restmethod "https://www.googleapis.com/youtube/v3/search?part=snippet&pageToken=$next_page&type=video&q=$search&videoLicense=$copyright&key=$youtube_key"
$page_count++
}

return $video_list
}

function get-youtubepopular() {
Param(
[AllowEmptyString()]$max_page = 5,
[AllowEmptyString()]$copyright = "any",
[AllowEmptyString()]$youtube_key=""
)

$Search_results = invoke-restmethod "https://www.googleapis.com/youtube/v3/videos?chart=mostPopular&key=$youtube_key&part=snippet"
$page_count = 1
$video_list = @()

while(($Search_results.nextPageToken) -and ($page_count -le $max_page)) {
$next_page=$Search_results.nextPageToken

foreach($video_info in $search_results.items) {
$video_id = $video_info.id
$video_stats = invoke-restmethod "https://www.googleapis.com/youtube/v3/videos?part=statistics&id=$video_id&key=$youtube_key"
[int]$views = $video_stats.items.statistics.viewcount
[int]$likes = $video_stats.items.statistics.likecount
[int]$dislikes = $video_stats.items.statistics.dislikeCount
$title = $video_info.snippet.title
$link = "https://youtube.com/watch?v=$video_id"

$video_list += new-object psobject -Property @{
title = "$title";
video_id = "$video_id";
likes = $likes;
dislikes = $dislikes;
views = $views;
link = "$link";
}

}

$Search_results = invoke-restmethod "https://www.googleapis.com/youtube/v3/videos?chart=mostPopular&pageToken=$next_page&key=$youtube_key&part=snippet"
$page_count++
}

return $video_list
}
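Both functions return a list of plain psobjects, so the usual pipeline cmdlets work on the results. A quick sketch, with mock entries standing in for a real API response:

```powershell
# Mock entries shaped like the objects Get-youtubesearch builds
$video_list = @(
    (New-Object psobject -Property @{ title = "Clip A"; views = 120; likes = 10; dislikes = 1 }),
    (New-Object psobject -Property @{ title = "Clip B"; views = 980; likes = 55; dislikes = 3 }),
    (New-Object psobject -Property @{ title = "Clip C"; views = 430; likes = 20; dislikes = 8 })
)

# Most-viewed first, keeping only the columns we care about
$top = $video_list | Sort-Object views -Descending | Select-Object title, views
```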

Using Powershell with the RingCentral API

I had the opportunity to work with the RingCentral API. I ended up with a series of functions that will handle the authorization/token information for you, as long as you don't mind using the password authorization workflow. When getting the API key (https://developer.ringcentral.com/library/getting-started.html), make sure the "Platform Type" is set to Desktop (Mac/Windows/Other).

Code covered in this post can be found on pastebin or on my github
Quick Review of the Process and API:

RingCentral's API is still pretty new; it looks like it has been around for about a year and a half. There are some strict requirements to meet before moving from the sandbox environment to their production environment:
1) All Permissions requested must be used

2) A total of 20 calls must be made and each permission must be used at least once.

3) Out of all calls made over 48 hours no more than 5% can generate errors (getting throttled, bad calls, etc.)

4) It can take up to 7 days for your app to be approved.

Much of the detection is automatic, but it can take several hours for it to pick up your usage and errors. None of the requirements for moving from the sandbox app to the production app are unreasonable, but I found the process a bit clunky. Another interesting quirk is that not all permissions for your app can be requested from the portal; you'll need to contact email support for something like downloading call recordings. The email support is pretty responsive, and I was pleased with that. I do wish the sandbox environment came with bogus data already populated. Rather than exploring and getting ideas for what I could do with the API, I kept spending time building fake data to resemble the production environment. Not horrible, but something that could be a bit friendlier.

The API documentation lacked any examples for PowerShell. Even so, it is generalized and written well enough that figuring out a few examples on my own wasn't a showstopper.

Before I go into the authorization scripts I want to focus on the global variables and configuration.

To get started:

$api_server_url = "https://platform.devtest.ringcentral.com"
$media_server_url = "https://media.devtest.ringcentral.com:443"
$username = ''
$password = ''
$extension = ''

#Base64-encoded string of AppKey:AppSecret. Use a site like https://www.base64encode.org/ or build your own; it never changes, so meh.
$app_key = ""

$log_path = "C:\scripts\log\ringcentral.log"

All of the information can be found at https://service.devtest.ringcentral.com and https://developer.ringcentral.com/library/tutorials/get-started.html . The $app_key I didn’t bother to do dynamically since it is set and forget. You can go to most websites and get it encoded.
Make sure your $log_path is pointed to the log file.
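You can also build the Base64 value locally instead of trusting a website with your secret. A sketch; `$rc_app_key` and `$rc_app_secret` are placeholder names standing in for your own credentials:

```powershell
$rc_app_key = "myAppKey"        # placeholder app key
$rc_app_secret = "myAppSecret"  # placeholder app secret

# Basic auth expects Base64("key:secret")
$app_key = [Convert]::ToBase64String(
    [Text.Encoding]::UTF8.GetBytes("${rc_app_key}:${rc_app_secret}"))
```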

You can test whether everything is working by running get-authstatus.
If it returns true, you're ready to start making API calls. If you get false, check the $log_path file and look over your authorization information for typos.

If you want to use the auth functions in your own scripts you only really need a small chunk of code:

if(get-authstatus) {
$auth_token = $auth_obj.authorization
} else {
return $false
}

An example of a function making a call to the RingCentral API:

Function get-calllog() {
if(get-authstatus) {
$auth_token = $auth_obj.authorization
} else {
return $false
}
try {
$call = invoke-restmethod -uri "$api_server_url/restapi/v1.0/account/~/call-log" -Headers @{"Authorization" = "$auth_token"}
} catch {
return $false
}
return $call
}

The code can be found on pastebin or on my github

Besides the global variables, the whole thing is three functions:

get-authstatus :

function get-authstatus() {

if(($auth_obj.expires_in -gt (get-date)) -and ($auth_obj.isset -eq $true)) {
return $true
} elseif(($auth_obj.expires_in -lt (get-date)) -and ($auth_obj.isset -eq $true) -and ($auth_obj.refresh_token_expires_in -gt (get-date))) {

if(auth_refresh) {
echo "Token expired and refreshed successfully" >> $log_path
return $true
} else {
echo "Failed Token refresh" >> $log_path
return $false
}

} else {

if(auth_initiate) {
echo "Initializing Auth token" >> $log_path
return $true
} else {
return $false
}
}
}

This function checks that the initial authorization has been done and that none of the tokens have expired. If they have, it calls the correct function to initialize or renew the authorization.

auth_initiate:

function auth_initiate() {

#Authentication post data
$auth_post = @{
grant_type = 'password'
username = $username
password = $password
extension = $extension
}

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Authorization", "Basic $app_key")
$headers.Add("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8")

try {
$url = $api_server_url + "/restapi/oauth/token"
$auth_token = invoke-restmethod -Method Post -Uri $url -Body $auth_post -headers $headers -ContentType "application/x-www-form-urlencoded"
$authorization = $auth_token.token_type + " " + $auth_token.Access_token
} catch {
echo "Error initiating token: $_" >> $log_path
return $False
}
}

$Global:auth_obj = [PSCustomObject] @{
Isset = $true
authorization = $authorization
refresh_token = $auth_token.refresh_token
expires_in = (Get-date).Addseconds($auth_token.expires_in)
refresh_token_expires_in = (Get-date).Addseconds($auth_token.refresh_token_expires_in)
scope = $auth_token.scope
owner_id = $auth_token.owner_id
endpoint_id = $auth_token.endpoint_id
}

return $auth_obj
}

The above function creates the initial authorization token and converts the token's expiration time into local system time. It does the same for the refresh token's expiration. All of that information is populated into the global authorization object.
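The expiry bookkeeping boils down to anchoring a lifetime-in-seconds to the local clock, after which a plain date comparison tells you whether the token is still good:

```powershell
# The API reports token lifetimes in seconds (e.g. 3600 for one hour)
$expires_in_seconds = 3600
$expires_at = (Get-Date).AddSeconds($expires_in_seconds)

# A token is valid while its deadline is still in the future
$token_valid = $expires_at -gt (Get-Date)
```

This is the same comparison get-authstatus makes against $auth_obj.expires_in.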

auth_refresh:

function auth_refresh() {
$refresh_post = @{
grant_type = 'refresh_token'
refresh_token = $auth_obj.refresh_token
}

$url = $api_server_url + "/restapi/oauth/token"

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Authorization", "Basic $app_key")
$headers.Add("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8")

try {
$auth_token = invoke-restmethod -Method Post -Uri $url -Body $refresh_post -headers $headers -ContentType "application/x-www-form-urlencoded"
$authorization = $auth_token.token_type + " " + $auth_token.Access_token
} catch {
echo "Error refreshing token: $_" >> $log_path
return $false
}
}

$Global:auth_obj = [PSCustomObject] @{
Isset = $true
authorization = $authorization
refresh_token = $auth_token.refresh_token
expires_in = (Get-date).Addseconds($auth_token.expires_in)
refresh_token_expires_in = (Get-date).Addseconds($auth_token.refresh_token_expires_in)
scope = $auth_token.scope
owner_id = $auth_token.owner_id
endpoint_id = $auth_token.endpoint_id
}

return $auth_obj
}

If the access token has expired, this uses the refresh token to re-authorize.

That is it for today. Thanks for reading.