Cloud-based DNS monitoring with IPinfo Enrichment

IPinfo-enriched DNS query logging

As the Log4Shell vulnerability progressed last year, there were extensive efforts to set up callback environments for data collection, proof-of-concept development, and vulnerability validation.

This piqued my curiosity and led me to set up my own hosted Interactsh server (https://github.com/projectdiscovery/interactsh) by ProjectDiscovery running in DigitalOcean. I highly recommend it, as it provides far more extensibility beyond the DNS queries that I describe within this article. In the future, I'd like to integrate its notify functionality with the IPinfo enrichment in this solution. The overall setup is very easy and everything just works. However, I really wanted to understand the inner workings of DNS, why it works, and whether it truly is a usable attack vector. To augment hobby research with my day job, I chose to use cloud-native services within AWS to build a simple ecosystem demonstrating the importance and value of DNS security.

Additionally, I wanted to highlight the IPinfo service, which has tons of use cases beyond this one and is certainly worth checking out. Even if you do not build the DNS dashboard specifically, the Lambda functions and Python code are reusable and scalable for enriching data feeds and log sources throughout your environment.

At the conclusion of the article, you will be able to automatically generate an IPinfo map and basic HTML dashboard with DNS query information.

https://ipinfo.io/tools/map/9d0f5a95-9327-4af1-ad99-4c64d39e786b

The remainder of this article will cover:

  • Overview of DNS
  • Security components of DNS
  • Building the environment
  • Data enrichment using IPinfo service
  • Visualizing the data
  • Links to the Terraform and code - coming soon

If you do not want to build it yourself, you can check out the final product dashboard at https://dashboard.icicles.io, which refreshes every 4 hours with DNS traffic to icicles.io.

The tutorial utilizes the AWS console, although I have been working on building out all of the Terraform so that it can be fully deployed as code. It is not quite there yet, but I will continue working to get it fully finished. I have been wanting to get this article released and decided not to wait until all of the Terraform was perfect. The code is available at https://github.com/brevityinmotion/dnsdashboard.

Overview of DNS

DNS is an acronym for Domain Name System. CloudFlare has an excellent article explaining DNS and describes it as "the phonebook of the Internet" and it "translates domain names to IP addresses" so that "humans do not need to memorize IP addresses".

Beyond the standard usage of DNS, it is also a vector for attackers to exfiltrate information, often in blind attack scenarios. To execute such an attack, the researcher must log and track these DNS queries. To gain this level of visibility, we can configure authoritative nameservers for domains that we own. The authoritative nameserver provides the IP address back to the requestor if there is a match. The valuable piece of this setup is that, since we control the authoritative server, we can log all of the lookup requests for a domain name that we own, whether or not they are valid.

Security Components of DNS

Why does this matter? — Data exfiltration

In corporate networks, a good firewall policy blocks all internal systems from making connections directly outbound. A common strategy is the use of proxy services for controlled egress, applying security protections such as category blocking, malware detection, and traffic logging rather than allowing users to make connections straight to the internet. In some situations, DNS traffic is permitted directly outbound through the firewall. Typically, there are additional security layers such as deep packet inspection (DPI) which would only permit valid protocol traffic, so you could not necessarily set up something like a reverse shell, as it would fail protocol inspection. However, let's say that the environment is hardened and all external DNS traffic must go through an on-premises DNS resolver on the internal network.

When an end user needs to perform a DNS lookup, their request is sent to a DNS resolver, which is typically configured within their network interface and may be managed through group policy, mobile device management (MDM), hard-coded settings, or inherited from an Internet Service Provider (ISP). For corporations, these resolvers should be managed and monitored for proactive security defense. For personal computing, you will typically inherit configurations from your ISP, although there are privacy, ad-blocking, and security services that give home users security functionality similar to what corporations deploy. Example services worth checking out include OpenDNS and Pi-hole.

From a Blue Team perspective, DNS resolver logs have a trove of information. Various use cases may include security defenses such as:

  • Correlating DNS requests with threat feed indicators of compromise (IOCs) to identify malware, threat groups, or persistence within the environment.
  • The ability to respond with a monitored "sinkhole" IP address for a list of known malicious domains to capture packets.
  • Categorical blocking of DNS requests destined for intentionally blocked sites such as weapons, hate speech, adult content, or gambling.

The Blue Team is either proactively stopping requests at the resolver to protect users and the surrounding organization or they are continually analyzing requests for anomalous behavior or patterns to respond to and investigate. This is an area that may have potential related to artificial intelligence for pattern, trend, and outlier recognition.
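
As a simple illustration of the IOC-matching use case above, here is a minimal sketch (my own example, not part of the dashboard build) that compares logged query names against a small set of known-bad domains; the IOC list and log entries are hypothetical placeholders.

# Minimal sketch: flag logged DNS queries that match known-bad domains (IOCs).
# The IOC list and log entries below are hypothetical placeholders.
ioc_domains = {"malicious-c2.example", "bad-tracker.example"}

dns_log = [
    {"queryname": "update.malicious-c2.example.", "clientip": "203.0.113.10"},
    {"queryname": "www.icicles.io.", "clientip": "198.51.100.25"},
]

for entry in dns_log:
    # Normalize the query name (strip the trailing dot and lowercase it).
    qname = entry["queryname"].rstrip(".").lower()
    # A query matches if it equals an IOC domain or is a subdomain of one.
    if any(qname == ioc or qname.endswith("." + ioc) for ioc in ioc_domains):
        print(f"ALERT: {entry['clientip']} queried {qname}")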

From the attacker's perspective, if the request can make it through the security layers on the initial resolver, it is forwarded to the Internet (leaving the internal corporate firewall, likely through a specific port/destination allowance), first to a root nameserver, then to a top-level domain (TLD) nameserver, and eventually to the attacker's managed authoritative DNS server. Once the request reaches the authoritative DNS server, it does not matter whether the subdomain is valid, because the value is in the content of the queried subdomain. Within these requests, malicious exfiltration could occur, particularly in blind attacks: transmitting an environment variable, an AWS metadata secret, the local hostname, chunks of /etc/passwd, command-and-control updates, encryption keys, tokens, or any other internal information that the attacker or malicious software can access.
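
To make the mechanics concrete, here is a small illustrative sketch (my own example, not tooling from this article) showing how a short string could be chunked into DNS labels under a domain you control. These are exactly the kinds of query names that the logging built later in this article would capture, which is useful to keep in mind when reviewing the logs.

# Illustrative only: show what exfiltration-style query names look like so that
# defenders know what to expect in their Route 53 query logs.
def to_query_names(data: str, domain: str, chunk_size: int = 32):
    # Hex-encode the data so it only contains DNS-safe characters,
    # then split it into labels no longer than chunk_size.
    encoded = data.encode().hex()
    chunks = [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]

# Example: a hostname split across two queries to a domain we own.
for name in to_query_names("internal-host-01.corp.local", "icicles.io"):
    print(name)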

Due to the pervasiveness of DNS as a backbone technology of IT systems, this is typically a valid path in nearly all scenarios because, as we know, a fault in DNS breaks everything...

From https://ih1.redbubble.net/image.1035308582.8011/st,small,507x507-pad,600x600,f8f8f8.jpg

Building the Environment

In theory, this all made sense, but does it really work, is it feasible to configure, and what interesting methods can we apply to this?

This solution was fully developed within Amazon Web Services (AWS) and utilizes a third-party, API-based integration with IPinfo, which I highly recommend using.

Configuring Route 53 DNS

AWS offers a DNS service called Route 53. In order for this to work, you will need to either transfer an existing domain into Route 53 or purchase a new one.

Once you have a domain within your control, you can create a hosted zone which establishes your own authoritative DNS as was previously discussed.

  • On the left-side of the console while in the Route 53 service, click Hosted zones
  • Click "Create hosted zone"
  • Continue to create a Public hosted zone using the Domain name that you own and have bought or imported into Route 53.

Configure CloudWatch logs

Once the hosted zone is created, you will want to click "Configure query logging" as it is not enabled by default.

Within the query logging configuration, select "Create log group" and provide a name for the log group. I prefer to utilize the recommended format of /aws/{service}/{domain}.

For the permissions section, you can select the Built-in AWSServiceRoleForRoute53 which grants access to all of the log groups or if desired, you can create a specific service role limited to the specific log group.

At this point, you should be able to test the setup to see if you are logging DNS requests for the domain.

Testing the query logging functionality

To test, you can open a browser and attempt to go to a site within the hosted domain. The subdomain does not have to be valid, and you can see in the following test that the DNS A record does not exist.
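
If you prefer to generate the test query from code rather than a browser, a lookup for a unique, random subdomain will also do the trick. This is just a convenience sketch using the Python standard library; substitute your own domain, and note that it assumes no wildcard record exists.

# Generate a test DNS query for a unique subdomain of a domain you control.
# The lookup will fail (NXDOMAIN) because no A record exists, but the query
# itself still reaches the authoritative nameserver and is logged.
import socket
import uuid

domain = "icicles.io"  # replace with your own hosted zone
test_name = f"test-{uuid.uuid4().hex[:8]}.{domain}"

try:
    socket.gethostbyname(test_name)
except socket.gaierror:
    print(f"Query sent for {test_name} (no A record, as expected)")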

Searching the DNS query logs

In order to search the logs within the AWS console, navigate to CloudWatch --> Log groups --> and then select the log group that you created /aws/route53/domain. If you do not see it immediately, you can refresh the Log events several times. Depending on the volume of queries to the domain, you'll quickly see many groupings of entries and it immediately becomes tedious to search. However, the test event should appear. At this point, the DNS query exfiltration vector is now valid to utilize against a DNS resolver.
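
If clicking through the console becomes tedious, the same log group can also be searched from code with boto3. A minimal sketch, assuming the log group name used earlier and AWS credentials configured locally:

# Search the Route 53 query log group for recent events containing a test subdomain.
import time
import boto3

logs = boto3.client("logs")
response = logs.filter_log_events(
    logGroupName="/aws/route53/icicles.io",      # your log group name
    filterPattern="test",                         # substring to look for
    startTime=int((time.time() - 3600) * 1000),   # last hour, in milliseconds
)
for event in response["events"]:
    print(event["message"])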

Data enrichment using IPinfo service

Since the raw DNS logs only contain IP address CIDR ranges, these alone do not provide significant value beyond tracking repeat requests or attempting to match against specific CIDR Indicators of Compromise (IoCs). This is where enrichment services bring extensive value to solutions. With the IPinfo service, we can add the origination city, region, country, latitude, longitude, postal code, timezone, and ASN, and store the information alongside the DNS details within the DynamoDB table.

Screenshot from ipinfo.io
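
If you want to try the IPinfo Python library on its own before wiring it into Lambda, here is a quick standalone sketch of the same calls the Lambda will make (the token and IP address are placeholders):

# Quick standalone test of the IPinfo Python library (pip install ipinfo).
import ipinfo

access_token = "YOUR_IPINFO_TOKEN"  # placeholder
handler = ipinfo.getHandler(access_token)

details = handler.getDetails("8.8.8.0")  # example /24 starting address
print(details.city, details.region, details.country, details.loc, details.org)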

Preparing the Lambda function

In order to make the CloudWatch log data usable, we need to process the log event. The first step is to write Python code and deploy it as a serverless function using a service called AWS Lambda.

The Lambda does the following:

  • Queries AWS Secrets Manager for the IPinfo access token. It is always important to keep secrets/passwords/tokens out of source code; the code dynamically loads the token as a variable at runtime.
  • Ingests the log event and expands the event data into a list of JSON objects.
  • Iterates over the JSON list and loads the relevant event fields into variables.
  • Splits the CIDR range into a single IP address (the resulting IP address will always end with a zero, as the DNS query logging only provides the /24 CIDR range, which is still very specific).
  • Utilizes the IPinfo Python library to retrieve the IP information from the IPinfo API.
  • Stores the event data and the IP enrichment into DynamoDB for future retrieval and lookup.

Set up the IPinfo token and load it into AWS Secrets Manager

To prepare access to the IPinfo API, you must first register with the service and obtain at least the free-tier API key. The token can be retrieved from the Token section of the account dashboard. There is an allowlist section for the token, which initially raised some concern since I did not want to pay for a managed NAT gateway for an IP address when running the Lambda within a VPC, although the setup worked successfully without defining any specific domains (phew).

https://ipinfo.io/account/token

With that token, we must now load it into AWS Secrets Manager. The Lambda will look for a secret named "brevity-recon-apis", so you should use that name unless you plan to modify it within the Lambda code. Then, in the details, add the secret value as a key/value pair in the format ipinfo:SECRETTOKEN.
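
If you prefer to create the secret from code instead of the console, here is a minimal boto3 sketch (the token value is a placeholder):

# Create the "brevity-recon-apis" secret with the IPinfo token as a key/value pair.
import json
import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")
client.create_secret(
    Name="brevity-recon-apis",
    SecretString=json.dumps({"ipinfo": "YOUR_IPINFO_TOKEN"}),  # placeholder token
)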

Configuring the Lambda function

The next section is likely one of the messier and more confusing components of this setup. The Terraform code for it is in progress and will make it much easier to deploy; I will update the article once it is ready. The entire Lambda function looks like this:

import json, boto3, os, re
import gzip
import base64
import ipinfo
from io import BytesIO
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    
    dynamodbclient = boto3.client('dynamodb')
    cw_data = str(event['awslogs']['data'])
    cw_logs = gzip.GzipFile(fileobj=BytesIO(base64.b64decode(cw_data, validate=True))).read()
    log_events = json.loads(cw_logs)
    
    # Retrieve an AWS Secrets Manager secret
    def get_secret(secret_name, region_name):

        # Create a Secrets Manager client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name
        )

        # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
        # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
        # We rethrow the exception by default.

        try:
            get_secret_value_response = client.get_secret_value(
                SecretId=secret_name
            )
        except ClientError as e:
            if e.response['Error']['Code'] == 'DecryptionFailureException':
                # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
            elif e.response['Error']['Code'] == 'InternalServiceErrorException':
                # An error occurred on the server side.
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
            elif e.response['Error']['Code'] == 'InvalidParameterException':
                # You provided an invalid value for a parameter.
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
            elif e.response['Error']['Code'] == 'InvalidRequestException':
                # You provided a parameter value that is not valid for the current state of the resource.
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
            elif e.response['Error']['Code'] == 'ResourceNotFoundException':
                # We can't find the resource that you asked for.
                # Deal with the exception here, and/or rethrow at your discretion.
                raise e
        else:
            # Decrypts secret using the associated KMS CMK.
            # Depending on whether the secret is a string or binary, one of these fields will be populated.
            if 'SecretString' in get_secret_value_response:
                secret = get_secret_value_response['SecretString']
                return secret
 
            else:
                decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                return decoded_binary_secret
    
    def retrieveIPInfo(ip_address):
        # Retrieve API key for IPInfo
        secretName = "brevity-recon-apis"
        regionName = "us-east-1"
        secretRetrieved = get_secret(secretName,regionName)
        secretjson = json.loads(secretRetrieved)
        access_token = secretjson['ipinfo']
        handler = ipinfo.getHandler(access_token)
        details = handler.getDetails(ip_address)
        return details
    
    for log_event in log_events['logEvents']:
        queryid = log_event['id']
        log_event = log_event['extractedFields']
        dns_timestamp = log_event['timestamp']
        dns_zoneid = log_event['zoneid']
        dns_queryname = log_event['queryname']
        dns_querytype = log_event['querytype']
        dns_responsecode = log_event['responsecode']
        dns_protocol = log_event['protocol']
        dns_edgelocation = log_event['edgelocation']
        dns_resolverip = log_event['resolverip']
        dns_clientsubnet = log_event['clientsubnet']
        # Since the DNS query only provides a CIDR range, it converts it to the starting IP address of the range
        dns_clientipaddress = dns_clientsubnet.split('/', 1)[0]
        response = retrieveIPInfo(dns_clientipaddress)
        ipinfo_ip = response.ip
        ipinfo_city = response.city
        ipinfo_region = response.region
        ipinfo_country = response.country
        ipinfo_loc = response.loc
        ipinfo_org = response.org
        ipinfo_postal = response.postal
        ipinfo_timezone = response.timezone
        ipinfo_country_name = response.country_name
        ipinfo_latitude = response.latitude
        ipinfo_longitude = response.longitude
    
        dynamoItem = {
            'queryid': {'S': queryid},
            'timestamp': {'S': dns_timestamp},
            'dns_zoneid': {'S': dns_zoneid},
            'dns_queryname': {'S': dns_queryname},
            'dns_querytype': {'S': dns_querytype},
            'dns_responsecode': {'S': dns_responsecode},
            'dns_protocol': {'S': dns_protocol},
            'dns_edgelocation': {'S': dns_edgelocation},
            'dns_resolverip': {'S': dns_resolverip},
            'dns_clientsubnet': {'S': dns_clientsubnet},
            'dns_clientipaddress': {'S': dns_clientipaddress},
            'ipinfo_ip': {'S': ipinfo_ip},
            'ipinfo_city': {'S': ipinfo_city},
            'ipinfo_region': {'S': ipinfo_region},
            'ipinfo_country': {'S': ipinfo_country},
            'ipinfo_loc': {'S': ipinfo_loc},
            'ipinfo_org': {'S': ipinfo_org},
            'ipinfo_postal': {'S': ipinfo_postal},
            'ipinfo_timezone': {'S': ipinfo_timezone},
            'ipinfo_country_name': {'S': ipinfo_country_name},
            'ipinfo_latitude': {'S': ipinfo_latitude},
            'ipinfo_longitude': {'S': ipinfo_longitude}
        }
        dynamoresponse = dynamodbclient.put_item(TableName='brevity_ipinfo',Item=dynamoItem)
    
    return {
        'statusCode': 200,
        'body': json.dumps(log_event),
        'dbstatus': dynamoresponse
    }

The Lambda requires a custom "Layer," which adds imports that are not contained in the base Lambda runtime. The only custom import needed here is the IPinfo Python library; however, the second Lambda that we create later in this tutorial also needs the Pandas and Requests libraries, so we will add them now and reuse the layer. Depending on your environment, you may need to adjust the file paths, but this is the script that I use to create the Lambda layer. You can also retrieve the built layer directly from https://github.com/brevityinmotion/dnsdashboard/blob/main/build/brevity-ipinfo.zip. The script is modular and reusable, as it creates a virtual Python environment and installs the necessary packages. You can add more packages to the list as long as you stay within the AWS Layer size hard limits. The final command is the AWS CLI command to publish the layer to your AWS account; make sure your AWS credentials are configured for it to succeed.

#!/bin/bash

NEWLAYER="brevity-ipinfo"

## Creating a Lambda Layer
## Script credits from https://towardsdatascience.com/python-packages-in-aws-lambda-made-easy-8fbc78520e30

cd /home/ec2-user/environment/ipinfo/build/

mkdir $NEWLAYER
cd $NEWLAYER
virtualenv v-env
source ./v-env/bin/activate
## Install packages here
pip install pandas
pip install requests
pip install ipinfo
deactivate

## Next steps
mkdir python
cd python
cp -r ../v-env/lib64/python3.7/site-packages/* .
cd ..
zip -r $NEWLAYER.zip python
aws lambda publish-layer-version --layer-name $NEWLAYER --zip-file fileb://$NEWLAYER.zip --compatible-runtimes python3.7 python3.8

Now you will need to upload the Lambda either directly via the AWS Lambda console or with the following script.

#!/bin/bash
LAMBDANAME="brevity-process-route53"
mkdir /home/ec2-user/environment/ipinfo/build/$LAMBDANAME
cp /home/ec2-user/environment/ipinfo/lambdas/lambda_function_$LAMBDANAME.py /home/ec2-user/environment/ipinfo/build/$LAMBDANAME/lambda_function.py
cd /home/ec2-user/environment/ipinfo/build/$LAMBDANAME
zip -r ../$LAMBDANAME.zip *
aws s3 cp /home/ec2-user/environment/ipinfo/build/$LAMBDANAME.zip s3://brevity-deploy/infra/
aws lambda create-function --function-name $LAMBDANAME --runtime python3.7 --handler lambda_function.lambda_handler --role arn:aws:iam::000000000000:role/brevity-lambda --layers arn:aws:lambda:us-east-1:000000000000:layer:brevity-ipinfo:1 --code S3Bucket=brevity-deploy,S3Key=infra/$LAMBDANAME.zip --description 'Performs Route53 DNS processing.' --timeout 300 --package-type Zip

If you are having difficulty building or uploading the Lambda, a working built zip file is available at https://github.com/brevityinmotion/dnsdashboard/blob/main/build/brevity-process-route53.zip. Once the Lambda is deployed or created directly within the console, you will need to configure the CloudWatch Logs trigger. This establishes the relationship between a DNS log event and the execution of the Lambda against that event. With the "brevity-process-route53" Lambda open, click "Add trigger" and configure it to match the following:

  • Log source: /aws/route53/domain
  • Filter name: brevity-route53-queries
  • Filter pattern: [logversion,timestamp,zoneid,queryname,querytype,responsecode,protocol,edgelocation,resolverip,clientsubnet]

Save the trigger by clicking "add".
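
The same trigger can also be created from code as a CloudWatch Logs subscription filter. A minimal sketch, assuming the log group and Lambda names used above and that the placeholder account ID is replaced with your own:

# Subscribe the Route 53 query log group to the processing Lambda.
import boto3

logs = boto3.client("logs")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:000000000000:function:brevity-process-route53"

# Allow CloudWatch Logs to invoke the Lambda.
lambda_client.add_permission(
    FunctionName="brevity-process-route53",
    StatementId="route53-logs-trigger",
    Action="lambda:InvokeFunction",
    Principal="logs.amazonaws.com",
    SourceArn="arn:aws:logs:us-east-1:000000000000:log-group:/aws/route53/icicles.io:*",
)

# Create the subscription filter with the same filter pattern as the console setup.
logs.put_subscription_filter(
    logGroupName="/aws/route53/icicles.io",
    filterName="brevity-route53-queries",
    filterPattern="[logversion,timestamp,zoneid,queryname,querytype,responsecode,protocol,edgelocation,resolverip,clientsubnet]",
    destinationArn=function_arn,
)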

If you created the Lambda from directly within the console, make sure to add the "brevity-ipinfo" layer.

The completed Lambda will look like this:

Creating the DynamoDB table

Before the Lambda function can effectively store the DNS events, a DynamoDB table must be created. Navigate to the AWS DynamoDB service and click "Create table". Configure the table with the following settings and click "Create".

  • Table name: brevity_ipinfo
  • Partition key: queryid
  • Sort key: timestamp
  • Default settings
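
Equivalently, the table can be created with boto3. A minimal sketch matching the settings above; on-demand billing stands in for the console's default capacity settings, so adjust if you prefer provisioned capacity:

# Create the DynamoDB table used by the processing Lambda.
import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.create_table(
    TableName="brevity_ipinfo",
    AttributeDefinitions=[
        {"AttributeName": "queryid", "AttributeType": "S"},
        {"AttributeName": "timestamp", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "queryid", "KeyType": "HASH"},     # partition key
        {"AttributeName": "timestamp", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity for simplicity
)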

If everything is working properly, you should be able to generate DNS requests and then see them within the DynamoDB table.

Visualizing the data

At this point, we have set up the DNS, collected the logs, enriched the logs, and stored the parsed logs within a database. We still need a method to reference and consume this information. Within an enterprise or broader ecosystem, we would likely do further analysis of the data, such as comparing it to threat feeds, looking for anomalies or unusual geographic regions, or identifying brute-force enumeration, and then potentially triggering preventive or responsive actions. Within this walkthrough, we will deploy a simple serverless HTML dashboard fronted by the AWS CloudFront content delivery network (CDN), with the web content stored in an S3 bucket.

A more common serverless pattern would be to host the static content within S3 and then utilize client-side JavaScript to call an API and retrieve the latest data at page load. This would guarantee the latest data and is my long-term planned approach. That functionality is not yet built, so instead we are going to run a background data refresh process at a set interval. It introduces a delay in the dashboard data based on the runtime of the process, but is sufficient for this tutorial.

Create the hosting S3 bucket

To begin, we need to create an S3 bucket.

  • Navigate to the AWS S3 service and select "Create bucket".
  • Name the bucket the same as your domain (e.g., dashboard.icicles.io)
  • Select a preferred region (for this setup, I have everything running within us-east-1).
  • Leave "ACLs Disabled" which is the default setting.
  • Disable the "Block all public access" setting and check the box to acknowledge the risks of public buckets. A future iteration, as well as the Terraform, will incorporate an origin access identity so that the bucket does not need to be public and can restrict access to the CloudFront origin.
  • Versioning can remain disabled.
  • Set Server Side Encryption to Enabled.
  • For the Encryption Key Type, select "Amazon S3-managed keys (SSE-S3)".
  • Leave Object lock set to disabled.
  • Click "Create bucket".

Once the bucket is created, you can click on the Permissions tab and paste in the following policy in order to grant public read access to the objects. Note: you will need to change the bucket name in the policy to match your specific bucket name.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::dashboard.icicles.io/*"
        }
    ]
}

While remaining within the Permissions tab, configure the Cross-origin resource sharing (CORS) section to reflect the following. Note that you will need to change the AllowedOrigin to your specific domain.

[
    {
        "AllowedHeaders": [
            "Authorization",
            "Content-Length"
        ],
        "AllowedMethods": [
            "GET",
            "POST"
        ],
        "AllowedOrigins": [
            "https://dashboard.icicles.io"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]

Next, navigate to the "Properties" tab for the bucket. Scroll to the bottom to the "Static website hosting" section and click Edit. Configure the following:

  • Static website hosting: Enabled
  • Hosting type: Host a static website
  • Index document: index.html
  • Error document: 404.html
  • Click "Save changes"

At this point, the S3 hosting bucket will be ready for utilization.

Create the AWS SSM Parameter Store entry

In order to modularize the code, the dashboard bucket name can be stored as a retrievable variable within AWS Systems Manager Parameter Store, where it is read by the Lambda function.

  • Navigate to AWS Systems Manager Parameter Store.
  • Click "Create parameter"
  • Name: dashboardBucket - This is the value referenced within the Lambda function.
  • Tier: Standard
  • Type: String
  • Value: dashboard.icicles.io (the exact name of your bucket)
  • Click "Create parameter"
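
The same parameter can be created and read with boto3. A minimal sketch; the bucket name is an example:

# Store and retrieve the dashboard bucket name in SSM Parameter Store.
import boto3

ssm = boto3.client("ssm")

# Create the parameter (equivalent to the console steps above).
ssm.put_parameter(Name="dashboardBucket", Value="dashboard.icicles.io", Type="String")

# This is how the Lambda later reads it back.
bucket_name = ssm.get_parameter(Name="dashboardBucket")["Parameter"]["Value"]
print(bucket_name)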

Obtain a certificate for the dashboard website

Prior to setting up the dashboard website, a public certificate will be needed. Navigate to AWS Certificate Manager and click "Request certificate" and perform the following steps:

  • Certificate type: Request a public certificate
  • Fully qualified domain name: Add two domains: domain.com and *.domain.com. You likely could get by with a specific certificate such as dashboard.domain.com if you prefer.
  • Select DNS Validation
  • Click "Request"

Create the CloudFront CDN

In order to access the bucket, a content delivery network (CDN) can be set up in front of the S3 bucket using AWS CloudFront. Once you access the CloudFront service page, ensure that the "Distributions" section is selected and click "Create distribution".

The following settings will work for the configuration:

  • Origin domain: Utilize the dropdown to find the bucket you previously created.
  • Origin path: Leave blank
  • S3 bucket access: Don't use OAI (in the future, I intend to configure it to utilize an OAI policy).
  • Enable origin shield: No
  • Default cache behavior: All defaults are fine except for the cache settings. Since we want to ensure updated content each time the page is refreshed, configure "Cache key and origin requests" as follows:
  • Cache policy: CachingDisabled
  • Price class - Use all edge locations
  • Custom SSL Certificate - Select the newly created certificate from the dropdown in the ACM certificate section.
  • Triple check to ensure that Legacy support is not selected to avoid exorbitant dedicated IP address charges.
  • Security policy: TLSv1.2_2021 (recommended)
  • Supported HTTP versions: HTTP/2 should be checked
  • Standard Logging: Off
  • IPv6: On (although I don't think this has implications either way)
  • Click "Create Distribution"

At this point, we have all of the web framing completed. Now we need to load the S3 bucket with an HTML file to serve to the browser.

Creating the HTML website

Earlier in the article, I mentioned a single-page web app design which utilizes client-side JavaScript to request and load the data via an API call. To avoid making this design any more complex than it already is, we are going to approach this with a Lambda that queries DynamoDB on a schedule and outputs a static HTML file with the data.

For anyone who is a Node.js or front-end web developer, you could probably have coded this in your sleep, but since I am not, I thought this approach was still fairly slick as it reduces the front-end attack surface by limiting the dashboard user to a purely static HTML page.

To begin, we will need to create the Lambda function. The code for the Lambda is below and can also be referenced in GitHub:

import pandas as pd
import boto3
import requests
import io
from requests.structures import CaseInsensitiveDict
from boto3.dynamodb.conditions import Key

def lambda_handler(event, context):
    
    def _getParameters(paramName):
        client = boto3.client('ssm')
        response = client.get_parameter(
            Name=paramName
        )
        return response['Parameter']['Value']
    
    dashboardBucketName = _getParameters('dashboardBucket')
    
    def query_dns(dynamodb=None):
        if not dynamodb:
            dynamodb = boto3.resource('dynamodb')

        table = dynamodb.Table('brevity_ipinfo')
        response = table.scan(
            Limit=100
        )
        return response['Items']

    # Retrieve DNS entries from DynamoDB
    dnsResults = query_dns()
    # Load results into Pandas DataFrame
    dfDNS = pd.DataFrame(dnsResults)
    # Convert IP addresses column into a list
    ipList = dfDNS["ipinfo_ip"].tolist()

    # Retrieves the IPInfo Map URL
    def query_ipinfo(ipList):    
        headers = CaseInsensitiveDict()    
        url = "https://ipinfo.io/tools/map?cli=1"
        headers["Content-Type"] = "application/x-www-form-urlencoded"
        data = "@-" + str(ipList)
        resp = requests.post(url, headers=headers, data=data)
        response = resp.json()
        mapUrl = response['reportUrl']
        return mapUrl

    mapUrl = query_ipinfo(ipList)

    def generate_dns_html(dfDNS, mapUrl):
        resphtml = f"""<html>
        <title>Brevity In Motion - DNS Tracker</title>
        <body>
        <a href="{mapUrl}">IPInfo Map</a>
        """
        resphtml += dfDNS.to_html()
        resphtml += f"""
        </body>
        </html>
        """
        return resphtml

    resphtml = generate_dns_html(dfDNS, mapUrl)

    def upload_html(resphtml, bucketName):
        filebuffer = io.BytesIO(resphtml.encode())
        key = 'index.html'

        client = boto3.client('s3')
        response = client.upload_fileobj(filebuffer, bucketName, key, ExtraArgs={'ContentType':'text/html'})
        return response

    response = upload_html(resphtml, dashboardBucketName)
   
    return {
        'statusCode': 200
    }

This section of code is extremely versatile and reusable for other IPinfo integration use cases. It takes a Pandas DataFrame of IP addresses, converts it to a list, and passes the entire list in bulk to IPinfo for processing. This is a really valuable feature and makes it easy to incorporate into more use cases! I did not see specific Python requests library reference information for the map service within the IPinfo API documentation, but this is a Python-native method to submit IP addresses and retrieve the map URL from IPinfo. The URL is written as a clickable link in the dashboard; I don't think the map can currently be embedded. For embedded maps, the coordinates have already been retrieved, so it would be relatively simple to swap in an alternate map such as an embedded Google map.

    # Convert IP addresses column into a list
    ipList = dfDNS["ipinfo_ip"].tolist()

    # Retrieves the IPInfo Map URL
    def query_ipinfo(ipList):    
        headers = CaseInsensitiveDict()    
        url = "https://ipinfo.io/tools/map?cli=1"
        headers["Content-Type"] = "application/x-www-form-urlencoded"
        data = "@-" + str(ipList)
        resp = requests.post(url, headers=headers, data=data)
        response = resp.json()
        mapUrl = response['reportUrl']
        return mapUrl

    mapUrl = query_ipinfo(ipList)

Another neat feature of the Pandas library is that a DataFrame can be converted to an HTML table. That is essentially what this Lambda does: it queries DynamoDB, loads the results into a DataFrame, converts the IP addresses to a list, submits them to IPinfo to generate a map, writes the DataFrame to an HTML table, and uploads it to the S3 bucket as a static page.

The Lambda code can either be pasted directly into a new Lambda function, or the following script (with build environment path modifications) will also upload it. Additionally, the Lambda needs the brevity-ipinfo layer that was created in preparation for the initial Lambda.

#!/bin/bash
LAMBDANAME="brevity-operation-ipinfo"
mkdir /home/ec2-user/environment/ipinfo/build/$LAMBDANAME
cp /home/ec2-user/environment/ipinfo/lambdas/lambda_function_$LAMBDANAME.py /home/ec2-user/environment/ipinfo/build/$LAMBDANAME/lambda_function.py
cd /home/ec2-user/environment/ipinfo/build/$LAMBDANAME
zip -r ../$LAMBDANAME.zip *
aws s3 cp /home/ec2-user/environment/ipinfo/build/$LAMBDANAME.zip s3://brevity-deploy/infra/
aws lambda create-function --function-name $LAMBDANAME --runtime python3.7 --handler lambda_function.lambda_handler --role arn:aws:iam::000017942944:role/brevity-lambda --layers arn:aws:lambda:us-east-1:000017942944:layer:brevity-ipinfo:1 --code S3Bucket=brevity-deploy,S3Key=infra/$LAMBDANAME.zip --description 'Generates an IPinfo and Route53 logging dashboard.' --timeout 300 --package-type Zip

If there are difficulties, a built version of this Lambda is available at https://github.com/brevityinmotion/dnsdashboard/blob/main/build/brevity-operation-ipinfo.zip. Once the Lambda is loaded, run the test function with the default settings to generate the initial HTML file and ensure that it works successfully.

Creating an Amazon EventBridge rule to schedule the Lambda

We can schedule the Lambda to run every 4 hours using the Amazon EventBridge service.

  • Navigate to Amazon EventBridge service.
  • Click "Rules" from the left side.
  • Click "Create rule"
  • Name: brevity-dashboard-refresh
  • Event bus: default
  • Select "Schedule"
  • Click "Next"

On the next page, select "A schedule that runs at a regular rate" and select 4 hours.

Click Next

For the Target, select "AWS Service" --> "Lambda" --> "brevity-operation-ipinfo".

Click "Next".

The remainder of the settings are defaults and then select "Create rule".
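
If you would rather create the schedule from code, here is a minimal boto3 sketch (replace the placeholder account ID with your own):

# Schedule the dashboard Lambda to run every 4 hours via an EventBridge rule.
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

rule_arn = events.put_rule(
    Name="brevity-dashboard-refresh",
    ScheduleExpression="rate(4 hours)",
    State="ENABLED",
)["RuleArn"]

# Allow EventBridge to invoke the Lambda, then attach it as the rule target.
lambda_client.add_permission(
    FunctionName="brevity-operation-ipinfo",
    StatementId="eventbridge-dashboard-refresh",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
events.put_targets(
    Rule="brevity-dashboard-refresh",
    Targets=[{
        "Id": "brevity-operation-ipinfo",
        "Arn": "arn:aws:lambda:us-east-1:000000000000:function:brevity-operation-ipinfo",
    }],
)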

At this point, everything should be fully functional and ready for use.

To test the environment, navigate to the dashboard with your browser and if everything works, there should be a visible table of DNS history as well as a URL to the IPinfo generated map. Congratulations!

Lastly, if you utilize ProjectDiscovery's interactsh, you can add a nameserver (NS) record for a specific subdomain within the Route 53 hosted zone that was created and point it at the static IP address of your self-hosted interactsh server. Mine runs within DigitalOcean with a static IP address, but I manage the DNS with Route 53.

I really appreciate you taking the time to check out this article. If you have enjoyed it, follow me on Twitter @ryanelkins for additional tutorials related to cloud security, bug bounty, and automation. Thank you!