For Pentesters

How I use the samma scanners to do recon when pentesting

I was tasked with a first pentest against an AWS-hosted environment. This was an automated test to run before a more advanced pentest, and the goal was a quick scan of the assets to catch any low-hanging fruit so it could be resolved before the larger pentest was conducted.

The setup

As a base for the scanners I deployed a minikube Kubernetes cluster on my local desktop, so I could easily deploy the scanners and get a good overview of the results in Grafana and Kibana.
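The local base can be sketched with a few commands. The Grafana and Elastic Helm repos below are the official ones, but the chart choices, sizing and namespace are my assumptions, not from the original setup; the commands are commented out since they need minikube and helm installed.

```shell
# Sketch of the local base: minikube plus Grafana and Kibana via Helm.
NS="samma-io"   # same namespace as the deploy script later in the post
# minikube start --cpus=4 --memory=8192
# helm repo add grafana https://grafana.github.io/helm-charts
# helm repo add elastic https://helm.elastic.co
# helm install grafana grafana/grafana -n "$NS" --create-namespace
# helm install kibana elastic/kibana -n "$NS"
echo "dashboards land in namespace $NS"
```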

To deploy the scanners I first ran one scan with Docker to verify the result, then deployed each scanner as a weekly CronJob in the minikube cluster.
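That workflow can be sketched as below. The image name comes from the docker command later in the post, but the env variable, job name and schedule are my assumptions.

```shell
# 1) One-off test run with Docker (env variable name is an assumption):
# docker run --rm -e DOMAIN=example.com sammascanner/base
# 2) Once the output looks right, promote it to a weekly CronJob in minikube:
SCHEDULE="0 7 * * 5"   # Fridays at 07:00
# kubectl create cronjob base-scan --image=sammascanner/base \
#   --schedule="$SCHEDULE" -n samma-io
echo "weekly schedule: $SCHEDULE"
```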

The scanners

Base scanner

The first scanner I deployed was the base scanner from the samma repo. It takes a domain and does a full recon on it:

docker run -e DOMAIN=example.com sammascanner/base

This gives a good overview of the domain: what mail servers are used, and which DNS provider and DNS servers. It will also try to find more hosts by dictionary testing against the DNS.
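Parts of what the base scanner collects can be reproduced by hand with `dig`. The lookups are commented out so the sketch does not need network access, and the wordlist is only an illustration.

```shell
DOMAIN="example.com"
# dig +short MX "$DOMAIN"    # mail servers
# dig +short NS "$DOMAIN"    # DNS servers for the domain
# Dictionary testing: try common names in front of the domain.
CANDIDATES=""
for word in www mail vpn dev staging; do
  CANDIDATES="$CANDIDATES $word.$DOMAIN"
  # dig +short A "$word.$DOMAIN"   # a non-empty answer means the host exists
done
echo "candidates:$CANDIDATES"
```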

Nmap, Nikto and Tsunami

The classic network scanner Nmap was used as well. The client's full infrastructure was hosted in AWS, and by extracting all the IPs from the console I had a full list of addresses. With a bash script that deployed scanners against all those IPs I soon had a full list of hosts and open ports.
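The bash glue is essentially a loop over the IP list. Here is a minimal sketch with placeholder TEST-NET addresses; the real scan command is commented out, and the full helm-based deploy loop comes later in the post.

```shell
# Placeholder targets (TEST-NET addresses, never routable):
printf '203.0.113.10\n203.0.113.11\n' > /tmp/targets.txt
SCANNED=""
while IFS= read -r ip; do
  SCANNED="$SCANNED $ip"
  # nmap -sV "$ip"          # or deploy a per-IP scanner, as in the helm loop
  echo "queued scan for $ip"
done < /tmp/targets.txt
```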

I then also added a Nikto web scanner and a Tsunami scanner to run against the same hosts.

This extends the scans beyond open ports to identifying the services behind them, and to testing the targets more aggressively for vulnerabilities.

input="targets.txt"   # one IP per line

while IFS= read -r TARGET; do
  NAME=$(echo "$TARGET" | tr '.' '-')   # helm release names cannot contain dots
  HOUR=$(shuf -i 7-10 -n 1)
  MINUTE=$(shuf -i 0-59 -n 1)

  #helm uninstall tsunami-ip-$NAME -n samma-io
  helm upgrade --install tsunami-ip-$NAME --debug --set ip=$TARGET /home/mahe/helm-repo/tsunami -n samma-io
  #helm uninstall $NAME -n samma-io
  helm upgrade --install $NAME --debug --set target=$TARGET --set schedule="$MINUTE $HOUR * * 5" --set image.tag=v0.2 /home/mahe/helm-repo/nmap -n samma-io
  #helm uninstall nikto-$NAME -n samma-io
  helm upgrade --install nikto-$NAME --debug --set target=$TARGET --set schedule="$MINUTE $HOUR * * 5" /home/mahe/helm-repo/nikto -n samma-io
done < "$input"

The findings!

In Grafana I had a dashboard showing all the open ports. At first I saw the regular ones, port 80 and 443 and some 22. I skipped the web ports 80 and 443 and started looking at SSH: were they open, and did they accept a password? After some testing I quickly found that the SSH servers were all locked down to key-based authentication. Then a new port came up, 3306 … Well, that's a MySQL server that is wide open, and it should not be.

But from Grafana I could only see that the scanners had registered port 3306 as open. So I jumped into Kibana and searched for “3306”, which quickly showed me the log record with the open port and the IP of the target.

To verify that the target was open I ran the Nmap scanner in Docker against it, and sure enough, the port was open. Over Slack I sent a message to the network team, who quickly closed the port; to verify, I ran the Nmap Docker image again, and this time the port no longer came back as open.
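A quick way to double-check a single port from the desktop. The Docker image name here is an assumption, so a dependency-free fallback using bash's /dev/tcp is included, pointed at a placeholder TEST-NET address.

```shell
TARGET="203.0.113.10"   # placeholder; use the IP found in Kibana
# docker run --rm instrumentisto/nmap -p 3306 "$TARGET"   # image name assumed
# Fallback without Docker: bash's /dev/tcp with a short timeout.
if timeout 2 bash -c "echo > /dev/tcp/$TARGET/3306" 2>/dev/null; then
  STATUS="open"
else
  STATUS="closed or filtered"
fi
echo "port 3306 on $TARGET: $STATUS"
```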

During the scans, the samma scanners also picked up bad TLS settings and other smaller findings, such as outdated versions of Nginx and SSH services.

Moving on

To keep the system secure going forward and to quickly detect if a new port comes up, or if an Nginx version is falling behind, I set up samma scanners in the team's k8s cluster with weekly scans. Then I added alerts on open ports: if a new port that is not 80, 443 or SSH is detected, a message goes into Slack. There are also alerts on the number of Nginx hosts, whether the Nginx versions are all the same, and so on. This gives the company a good baseline to work from.
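The port alert boils down to a set-membership check. A sketch of the rule, with the Slack call commented out since the webhook URL is environment-specific:

```shell
EXPECTED="22 80 443"            # baseline ports that are allowed
FOUND="22 80 443 3306"          # ports the scanners reported
ALERTS=""
for port in $FOUND; do
  case " $EXPECTED " in
    *" $port "*) ;;             # known-good port, stay quiet
    *) ALERTS="$ALERTS $port"
       echo "ALERT: unexpected open port $port"
       # curl -s -X POST -d "{\"text\":\"open port $port\"}" "$SLACK_WEBHOOK"
       ;;
  esac
done
```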


It was super easy to add scanners for all the targets with my script. But in the first version of the script I did not randomize the crontab schedules, so ALL the scanners would start at once, bringing my desktop down in flames. After I had spread the scanners out over time it worked much better, and once you have the first weeks of scans it's simple to add markers in Grafana to alert you when something changes.
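Spreading the scanners out is only a couple of lines: pick a random slot in a morning window per scanner so the CronJobs do not all fire together.

```shell
HOUR=$(shuf -i 7-10 -n 1)     # random hour between 07 and 10
MINUTE=$(shuf -i 0-59 -n 1)   # random minute
CRON="$MINUTE $HOUR * * 5"    # Fridays, spread across the window
echo "schedule: $CRON"
```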

Using an Operator instead of the script I wrote would also be a better and easier way to deploy a scanner, so after this I started working on building an operator for samma.

Extracting all the IPs from AWS automatically is still on the to-do list here.
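One way to cross that off: the AWS CLI can list instance IPs directly. The JMESPath query below is the standard EC2 one, but the call is commented out here since it needs configured credentials.

```shell
OUT="targets.txt"
# aws ec2 describe-instances \
#   --query 'Reservations[].Instances[].PublicIpAddress' \
#   --output text | tr '\t' '\n' | grep -v '^None$' > "$OUT"
echo "would write one IP per line to $OUT"
```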

Digging through the results in Kibana is kind of interesting, and fun when you find the targets you are looking for.