
Recursion is a Harsh Mistress

BSides Charleston · 2024 · 36:12 · 30K views · Published 2024-11 · Watch on YouTube ↗
Speakers: Joel Moore (TheTechromancer)
Tags:
Category: Tooling
Difficulty: Intermediate
Team: Red
Style: Talk
About this talk
Recursion is a Harsh Mistress: The Dangers of Building a Recursive Internet Scanner, by Joel Moore.

In this engaging talk, cybersecurity expert Joel Moore (TheTechromancer) dives into the complexities of building a recursive internet scanner, BBOT, for discovering hidden vulnerabilities across massive networks. He shares insights into recursive scanning, an approach that can reveal subdomains and attack surfaces overlooked by traditional tools, but which comes with unique and challenging bugs that often produce unintended consequences.

Key topics:
• What is recursive scanning? Learn how recursion uncovers hidden vulnerabilities and why it can be a double-edged sword in cybersecurity.
• Challenges and bugs: From endless loops to DNS wildcards and cloud misconfigurations, explore the unexpected pitfalls encountered during BBOT's development.
• The power of recursive tools: Discover how recursive scanners like BBOT give red teamers, blue teamers, and threat intelligence professionals a competitive edge.
• Open source: BBOT is open source, making it accessible to the cybersecurity community for OSINT and external attack surface management.

Perfect for cybersecurity professionals, this talk offers valuable lessons in handling recursion safely and effectively. Whether you're working on bug bounties or defending large networks, Joel's journey with BBOT provides insight into navigating the complexities of recursive scanning.
Transcript [en]

Joel Moore: All right, hello, everyone. Welcome, hello, hello.

All right, OK, so welcome to Recursion Is a Harsh Mistress. This talk is about recursion and about scanning the internet, and how, when you pair those things together, you get all kinds of crazy bugs. These are not bugs that I exploited; they're actually bugs that exploited me. But it's a fun talk. Basically, it's a peek behind the scenes of making a hacking tool: all the disasters that befell us during the process, demos of things failing (you're going to see a lot of those pretty orange nodes you saw at the beginning), and fun facts about recursion. Everybody sort of knows what recursion is, and since we're

in cybersecurity, we know it's good for OSINT. But hardly anybody uses it, and we'll get into why that is. We'll also learn how, if you could actually use it, it could give you superpowers. And I'm very serious about this. When I say superpowers, I mean it will give you an edge as a red teamer, as a blue teamer, and even as a threat intel person. As long as you're interacting with the internet and gathering data from it, it will help you. Also, disclaimer:

I talk fast. I go fast. There are 100 slides and we have 45 minutes, so let's go. Who am I? I'm Joel; my handle is TheTechromancer. I basically like to build things, and this is the one thing you need to know about me. This was me when I was six years old with my little Lego creation, which you can't see because my camera's blocking it. I built Legos then; now I build Python, but in my brain they're totally the same thing. I'm a pen tester at Black Lantern. Some of the Black Lantern folks are here. Thank you. I'm the author of a couple of tools, mainly TrevorSpray and BBOT.

BBOT is the recursive internet scanner this talk is about, so I'll be talking a lot about it. All right, quick backstory. Back in 2022, Black Lantern was a small pen testing firm; we're actually headquartered here in Charleston. And actually, this is my first BSides. I don't know why I've waited; I've been here for five years. This is awesome, so I'll definitely be coming to more. Anyway, we were a small firm, maybe 10 people, and we were just starting to gain traction, so some bigger companies were starting to notice us. And we started to do this thing where every time

we took on a new client, we would do this engagement called Operation White Knuckle. The company would bring us in, and maybe they already had a red team, or they already had a pen testing company, but our thing was: you'd give us free rein on your external attack surface, we'd run this big, multi-month red team, and we'd find all the stuff your red team missed. We were a pretty ragtag group of hackers, and we each had our own suite of tools that we liked to run. Obviously, OSINT was a huge part of what we did, because when you

bring on a company that has, you know, 10,000 or 20,000 subdomains, each one with a website on it, you want to make sure you find every single one, because almost always the vulnerability is going to be on the one that's hardest to find. So we all had open source tools we used. You may recognize some of these, like Amass and theHarvester. Sometimes we'd run all of them and just sort-and-uniq the results; I'm sure anyone who's done OSINT has done something similar. But the companies we were bringing on started to get too big for this to work, and so we

were basically running into a problem, which was that our open source tools weren't cutting it anymore. You've probably seen the analogy of external attack surface as an iceberg, where the subdomains you know about are the tip of the iceberg and all the ones you don't know about are down below. What we realized was that we were missing a lot of stuff, and this wasn't your average pen test, so we couldn't afford to miss stuff. Our whole thing was: we need to find the stuff that other people aren't. So we basically hit the point that every startup hits sooner

or later, where we finally realized we couldn't keep living off only other people's GitHub; we had to make our own GitHub and actually write some of our own software. And since I was sort of the builder (I wasn't really a coder, I was just a hacker), the task fell on me to build this new OSINT tool. So I went on a quest and tried out all these tools, because I really didn't want to reinvent the wheel. I wanted to find a framework to build on so I wouldn't have to start totally

from scratch. I tested out a lot of different tools, and that's how I found SpiderFoot. SpiderFoot is very critical to understanding recursion, and it's foundational to this talk. Now, SpiderFoot wasn't really a pen testing tool. Out of curiosity, has anybody used SpiderFoot in here? OK, yeah. So it's been around for a long time; I think SpiderFoot has been in active development for like 12 years. But it's not really a pen testing tool. I know a few pen testers who have used it, but most of them don't use it every day, because it's geared more towards intel. There are lots of modules that, you

know, look up a phone number, a human name, or an address. It's kind of creepy. But what surprised me about SpiderFoot was really its design. When I looked under the hood and started to read some of SpiderFoot's code, I was very surprised at what I saw, because SpiderFoot's design is totally unique. It's different from any other tool, and it's actually kind of genius. OK, so I sketched out the workflow we typically use when doing OSINT, or really any external engagement: you gather subdomains, then you move forward, sort-and-uniq them,

do a port scan on them, and after you have the open ports, you visit the websites and do all this stuff, screenshots, whatever, and you end up with a whole bunch of URLs. It's a one-way process. SpiderFoot does not work like this. Instead, SpiderFoot has a whole bunch of different modules, and each module consumes a certain type of data, does something with it to discover more stuff, and then emits a different type of data. And all of these modules, of which there are like 200, feed into each other and constantly churn, sending data back and forth until there's basically nothing left

to discover. OK, so here's an example with three modules (again, there are over 200 in SpiderFoot), just to keep it simple. We have an SSL cert module, Nmap, and an HTTP module, which just visits a website. In this example, blue is a host. You see the hosts going into Nmap, Nmap doing a port scan and emitting red, which is open ports, and the other two modules consuming those. But then they consume them and produce hosts, which go back into Nmap. So to illustrate this, here's an example with those three modules. Our target is evilcorp.com, so we seed the

scan with that data, and since it's a host, it goes into Nmap. Nmap port scans it and finds open ports. SSL cert visits those open ports, pulls down the SSL certificates, and extracts hostnames. HTTP visits the website and extracts hostnames. We end up with more hosts, which go into Nmap, which produces more open ports, which produce more hosts, which get more open ports, more hosts, more open ports, and so on and so on. And this is recursion. This is the big idea, and this is why SpiderFoot is so unique. Also, this GIF is kind of creepy, I know, but I love it so much; it illustrates exactly what I'm trying to say.
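
To make that pattern concrete, here is a minimal Python sketch of the module loop being described. The module names and event types are illustrative only; this is not SpiderFoot's or BBOT's actual code, just the shape of the idea: modules consume one event type, emit another, and a shared queue plus deduplication drives the recursion to a fixed point.

```python
# Illustrative module loop: NOT SpiderFoot's or BBOT's real code.
# Each module consumes one event type and emits others; a queue plus
# a "seen" set drives the recursion until nothing new is produced.
from collections import deque

class Nmap:
    consumes = ("HOST",)
    def handle(self, event):
        # pretend port scan: every host has port 443 open
        yield ("OPEN_PORT", f"{event[1]}:443")

class SSLCert:
    consumes = ("OPEN_PORT",)
    def handle(self, event):
        # pretend cert parse: each cert reveals a "www." sibling once
        host = event[1].split(":")[0]
        if not host.startswith("www."):
            yield ("HOST", "www." + host)

def scan(seed):
    modules = [Nmap(), SSLCert()]
    queue, seen = deque([("HOST", seed)]), set()
    while queue:
        event = queue.popleft()
        if event in seen:          # dedup is what makes the loop terminate
            continue
        seen.add(event)
        for module in modules:
            if event[0] in module.consumes:
                queue.extend(module.handle(event))
    return seen

print(scan("evilcorp.com"))
```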

All right, now, SpiderFoot wasn't perfect. Like I said, it's not really a pen testing tool, and it was missing some functionality that we needed, so I knew that if we used it, I'd have to write a lot of modules for it. And there were other problems. It was also kind of slow; I knew I'd probably have to optimize it. And it was kind of buggy sometimes: sometimes it would get stuck spidering and just run forever and never stop. But what was important was that it was a nice framework and a good, solid idea, and I felt like it had a lot of potential,

so I sort of picked it as the foundation for my project. Fast forward six months. This was during COVID, when I was in my apartment, hadn't left for like a month, and was getting super obsessed with SpiderFoot and recursion, just going down a rabbit hole. It was totally not healthy. I spent a lot of time working on SpiderFoot: I dev'd on it, wrote a lot of new modules, overhauled some parts of the code, but I sort of kept digging myself deeper and deeper and slowly realizing that it wasn't going to work.

Basically, things needed refactoring that affected other things that needed to be refactored, and sooner or later I realized it would be easier to just redo SpiderFoot, to just build a new one. Friendship ended with SpiderFoot. So this is how BBOT came into being, and this talk is about BBOT. OK, so this is the first BBOT scan, and this is the format of all the demos in this talk: terminal on the right, graph on the left. What you're looking at here is a recursive DNS resolution of tesla.com, so you can see the recursion happening

in the graph. And this is just DNS resolution; this was before I wrote any modules, just a base scan, basically. But it was very promising, and I started to get really excited. This was before I ran into all the recursion bugs. All right, so DNS was the first feature, which means that DNS bugs were the first ones to arrive. So here we go.

So, yeah, something is going wrong here. If you can envision the first demo, the first little DNS thing, that's what this should look like, right? This definitely does not look like that. So what is happening here? Well, if we put that data into Neo4j, we can actually see what's happening. We seeded the scan with evilcorp.com, and we had some CNAMEs. Who knows what a DNS CNAME is? OK, it's basically a redirect, right? So what we have is a CNAME to a CNAME: this chain of CNAMEs that goes around and, at the end, points right back to our target. So you

can see why, if you have a recursive DNS resolution algorithm, this might mess with it, because it's a loop. Now, this should never exist. It really should never exist, and if you know DNS, you know it should never exist, and I beg you, please don't ever do that. But I can tell you that there are people who have done it. Maybe they're just chaotic evil; I don't know why you would do that. But we ran into it, and it really screwed with our scans. So, yeah, this was the first rumbling of many, many bugs that I would encounter.
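
As a rough illustration of the defense, here's a minimal sketch, assuming dnspython, of following a CNAME chain with a visited set and a hop cap, so an a -> b -> c -> a loop fails loudly instead of hanging the scan. This is not BBOT's actual implementation:

```python
# Sketch: follow a CNAME chain defensively (assumes dnspython).
import dns.resolver

def follow_cnames(name, max_hops=10):
    chain, seen = [name], {name.lower()}
    for _ in range(max_hops):          # hard cap as a backstop
        try:
            answer = dns.resolver.resolve(chain[-1], "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            break                      # end of the chain: all good
        target = str(answer[0].target).rstrip(".")
        if target.lower() in seen:     # loop detected: bail out loudly
            raise ValueError("CNAME loop: " + " -> ".join(chain + [target]))
        seen.add(target.lower())
        chain.append(target)
    return chain
```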

Joel Moore: All right, here's another DNS bug. OK, what is happening here? If you're a pen tester, or an OSINT person of any kind, you have probably encountered this very bug. This bug is not that bad if your tool isn't recursive, but if it is recursive, you can see that it just keeps going and keeps going. This is an actual DNS brute force against github.io. Now, github.io is a wildcard domain, which means that any subdomain you look up, it will give you an answer for. So if I look up asdfasdf.github.io, it'll give me an IP address back. Not a huge deal, right? Most of the

time, you just clean out the wildcards at the end with some sort of hacky Python script. Not so with recursion, because what's happening is you have subdomains of subdomains of subdomains, and every subdomain gets brute-forced again and again and again, and it will never stop. It will run until the end of time. That's what's happening here. And here's a dig showing the proof of concept: clearly, that subdomain does not exist; and there you go, it exists, here's an IP address. So DNS wildcards really mess with you when you're doing recursive enumeration on the internet; if you run into a DNS wildcard, it is catastrophic if you don't handle it exactly right.
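
The standard defense is a sanity probe, sketched below with dnspython: resolve a random subdomain that cannot plausibly exist, and if it resolves anyway, treat the parent as a wildcard and stop brute-forcing beneath it. This is an illustrative sketch, not BBOT's actual wildcard logic, which has to handle much nastier cases:

```python
# Sketch: detect a wildcard domain before recursing into it
# (assumes dnspython).
import random
import string
import dns.resolver

def is_wildcard(domain):
    junk = "".join(random.choices(string.ascii_lowercase, k=20))
    try:
        dns.resolver.resolve(f"{junk}.{domain}", "A")
        return True                # a made-up name resolved: wildcard
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False

# e.g. is_wildcard("github.io") -> True, so don't brute-force
# subdomains-of-subdomains underneath it.
```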

And these were not the last bugs; they kept coming. DNS was the first one, and then once we added web, it opened this whole new dimension of terrible possibilities, and the bugs just kept coming. For example, there were times we'd have a scan running and we'd enable the web modules. We had a DNS brute force, we had a directory brute force for dirbusting, and we also had Nuclei. So what would happen is, we'd have a customer with a

whole bunch of websites, and we would execute BBOT against them, and each one would get a dirbust against it. So we'd be getting URLs, all those URLs would be coming in and getting fed back in, and we'd be running Nuclei on them. But then we'd find out later, when they came to us: all of our websites are down, why are they down? Well, it turns out that all those dirbusts BBOT was doing, and all the URLs that were getting run through Nuclei... all those subdomains pointed back to the same web server, as virtual hosts. So, yeah, we basically accidentally DDoSed their server. Yes, that also happened.

BBOT had this problem: basically, you'd start a scan, everything would be going normally, and maybe I'd get up to get coffee or something. When I came back, it would just be doing something completely uncalled for, and I'd have to frantically cancel it. So I learned early on, during the creation of this tool, that it was very important that I monitor the scans and watch them to make sure they didn't do anything crazy. And it wasn't just DNS wildcards; cloud providers also gave us a big issue. Have you ever run a scan, maybe with something like Amass, and you've

accidentally included a cloud domain? Maybe you have a storage bucket in your target list, and you're accidentally enumerating all of Amazon AWS. Yeah, that can be a problem. Here's one that illustrates the cloud issue. This starts off as just a normal subdomain enumeration of SpaceX, and it's not until about 30 seconds in that something starts to happen.

You can see everything that's branching out from this central cluster; this is all related directly to spacex.com. These are all subdomains, URLs, and stuff from SpaceX. What we start to see here is this appendage spin off on its own and start growing, and this is not spacex.com. What happens is the scan recognizes that spacex.com is using Azure for a lot of its hosting, mistakenly includes azure.com in the scope, and starts enumerating all of azure.com, which is huge, by the way. And you don't want that. So, yes, this was also an issue. There was this one time

that something like that happened and we accidentally scanned a .gov. This was especially bad because we found a critical RCE on it. BBOT has a few modules that will look for things like compromised machine keys, or weak secrets on your web app, which a lot of the time can get you RCE, and this was one of those cases. I can't talk a lot about this, except to say that it was very awkward reporting it, and it got exploited before they could fix it. It was a whole thing. So anyway, these bugs in general: I had run into bugs like these before, in SpiderFoot, right? They were kind of typical for SpiderFoot,

but they weren't that bad there; SpiderFoot, most of the time, works great. I think I should have recognized some of those bugs as a warning that recursion is kind of dark territory. But I kind of had to make my own mistakes, you know? Had to try for myself and get burned.

All right, again, a simple enumeration of SpaceX, nothing to see here. Wait for it.

OK, what is this? What is happening here? Some of you may actually know what this is.

All right, you can see very clearly that it's not stopping, and if you look in the terminal (it's kind of hard to see), these are all URLs, and they're all the same website. So what is this? This is an infinite redirect. There are certain cases where you'll visit a website, it'll redirect you, and because of your cookies, or whatever reason, that redirect page will redirect you again. And there are cases where that happens infinitely, and every time it's a new, randomly generated URL. Again, I don't know why you would do that, but people do it, and it happened to us, and that's what it looks like.
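
One common guard, sketched here with the requests library, is to follow redirects manually with a hard hop cap so an endless, randomly generated chain gets cut off instead of feeding the scan forever; the cap of 10 is an arbitrary illustrative choice:

```python
# Sketch: follow redirects by hand with a hard cap (assumes requests).
import requests

def fetch(url, max_hops=10):
    hops = [url]
    for _ in range(max_hops):
        resp = requests.get(hops[-1], allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302, 303, 307, 308):
            return resp, hops          # not a redirect: we're done
        # resolve a relative Location header against the current URL
        hops.append(requests.compat.urljoin(hops[-1], resp.headers["Location"]))
    raise RuntimeError(f"redirect chain exceeded {max_hops} hops: {hops[-1]}")
```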

So: recursion plus infinite redirect equals tendrils of doom. These were web bugs, and we ran into a lot of web bugs. Web was probably worse than DNS; DNS wildcards gave us a lot of trouble, but web was even worse. We accidentally made a web spider like three times. That sounds kind of silly, but the fact is, when you have a recursive tool and the tool deals with URLs, all you need is two modules: one module that visits the URL, and one module that extracts URLs from the response, and those URLs get fed back into the first one. It's an infinite-loop web spider.
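
A minimal sketch of that accidental spider, plus the usual fix: tag every URL with a depth and stop extracting links past a configured distance. BBOT exposes a similar spider-distance setting; everything else here, including the naive regex link extractor, is illustrative:

```python
# Sketch: the two modules that make an accidental spider, tamed by a
# depth cap. The regex link extractor is deliberately naive.
import re
from collections import deque
from urllib.parse import urljoin
import requests

MAX_DEPTH = 2
HREF = re.compile(r'href=["\'](.*?)["\']')

def crawl(seed):
    queue, seen = deque([(seed, 0)]), {seed}
    while queue:
        url, depth = queue.popleft()
        resp = requests.get(url, timeout=10)        # module 1: visit the URL
        if depth >= MAX_DEPTH:
            continue               # past the cap: visit, but don't extract
        for link in HREF.findall(resp.text):        # module 2: extract URLs
            link = urljoin(url, link)
            if link.startswith("http") and link not in seen:
                seen.add(link)                      # feed back into module 1
                queue.append((link, depth + 1))
    return seen
```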

So, yeah, if you're curious what a rogue web spider looks like, this is it.

This is BBOT spidering the Linux kernel documentation. You can use your imagination: it could be like bacteria, or some sort of virus replicating, I don't know.

OK, so it was at this point, when the bugs seemed really unstoppable, that I had to take a step back and ask myself whether recursion was really worth it. I started daydreaming about those other tools I'd used a long time ago that weren't recursive, that worked so well and never had these problems. Maybe other people's GitHubs weren't so bad after all. So, yeah, some self-doubt. In the back of my head, I'd always wondered why SpiderFoot was the only recursive tool. OSINT is clearly a recursive problem,

so why isn't every tool recursive? Well, there's a really important reason for that, and this is what I was discovering: recursion is great, but the internet is not a safe place. And the argument could be made, in fact, as someone who has written a recursive internet scanner, I would make the argument, that the internet is 100% user input. It's user input all the way down. The internet was created by humans, it's all user input, and you're ingesting it into your program. So every time you scan the internet, you're essentially fuzzing your own program. So, yeah, basically,

in my arrogance, I had kind of thought that by starting fresh and being very careful, I could avoid the problems that SpiderFoot had had. But the reality is that if you build an internet scanner and you make it recursive, it's going to do crazy shit, and there's not going to be anything you can do about it. So, faced with a choice, I basically had to decide whether I wanted to try harder and possibly lose my mind, or give up on recursion and go back to the simple, linear way of doing things. And the whole BBOT experiment would kind of just be

a character-building exercise, and maybe I could put it on my resume as a cool failed experiment. Who knows?

So, how do you do it? Obviously, I wouldn't be giving this talk if I had given up. So how do you tame a recursive internet scanner? All right: unit tests. Has anyone heard of unit tests? If you haven't heard of unit tests, I envy you. OK, but it's true: unit tests were a huge reason why we were able to actually get BBOT off the ground. Every single one of those bugs that you saw, we have a unit test written for it that will fail if the bug gets reintroduced. As we were finding these bugs, every time we found one, we'd write a unit test for it and get it to pass, and sometimes that would break another one. But this is why you have unit tests: every time you push code, every single test has to pass before you can merge it. So, yeah: unit tests, huge.
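
As a flavor of what those regression tests look like, here's a self-contained pytest sketch for the CNAME loop bug from earlier, with the resolver faked by a dictionary so the test is hermetic. It is modeled on, not copied from, BBOT's actual test suite:

```python
# Sketch: a hermetic regression test for the CNAME loop bug.
# The "resolver" is just a dict, so no network is involved.
import pytest

def follow_cnames(name, lookup, max_hops=10):
    chain, seen = [name], {name}
    for _ in range(max_hops):
        target = lookup(chain[-1])
        if target is None:
            break
        if target in seen:
            raise ValueError("CNAME loop")   # the guard under test
        seen.add(target)
        chain.append(target)
    return chain

def test_cname_loop_is_detected():
    # a -> b -> c -> a: the exact shape of the bug from the demo
    records = {"a.evilcorp.com": "b.evilcorp.com",
               "b.evilcorp.com": "c.evilcorp.com",
               "c.evilcorp.com": "a.evilcorp.com"}
    with pytest.raises(ValueError):
        follow_cnames("a.evilcorp.com", records.get)
```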

Another thing was learning how to think about these problems in terms of recursion, because we as humans don't naturally think recursively about problems. We see them on the surface, and we don't generalize. We go, oh, I see this edge case, so I'm going to add an if statement that handles

this edge case. But when you have a recursive tool, it's really important to generalize: step back and ask yourself, OK, why did this happen? What other types of edge cases might be similar to this, and can we just fix them all in one place? So learning to think recursively, which is sort of an unorthodox way of thinking, was very key as well. OK, so I want to show you something. This isn't really a demo. Has anyone ever heard of Trickest, or used Trickest? OK, that's OK, it's pretty new. It's basically a platform that lets you automate things using things like bash scripts. It's

mainly for security professionals; the bug bounty community uses it a lot. What we're looking at right here is a Trickest workflow for subdomain enumeration. If we zoom in, we can see some nodes: basically, you give it a list of domains, and it puts those domains into a whole bunch of different tools. You may recognize some of these; they're just common subdomain enumeration tools, command line tools like Subfinder, Findomain, and Assetfinder. It runs all of them. It's like a bash script where you run them all and sort-and-uniq the results, similar to the diagram I showed before. Basically, it's this

linear pattern where we have all sorts of things: passive enumeration, wordlists doing DNS brute force, permutations on the subdomains. And the goal, of course, is to generate as many subdomains as possible. If you're a bug bounty hunter, this is super important. Again, finding the hardest-to-find subdomains gives you an edge, because those are the most likely to be the juicy ones, and if you find a juicy target and exploit it, you get paid. So this guy kindly made a Trickest workflow to do this. But I want to dwell on this for a second, because think about the way this works: it's a one-way flow of

data, where you slowly get more and more subdomains, and at the end you have this big list and you're like, all right, we're done. Well, the thing is, every single one of those subdomains has the potential to be used to discover more. If, on the last step, we discovered a permutation that resulted in a new subdomain, we'd want to start over with that. We'd want to go back and visit that website, do a port scan on it, do permutations of that permutation, and all of that, because every single one of those could potentially lead to more data. But in this system, there's no

way to do that. What you'd essentially have to do is copy the whole workflow, stick it on the end, and just do it again and again. This is an example of things becoming more and more complicated because you're not thinking recursively. So instead of this big assembly line of processes, we just have a simple circle that goes around and around and around, and that's what recursion is. OK, so who would win: this ultra-sophisticated pipeline, or one recursive boy? All right, so I took the liberty of running a subdomain enumeration using this workflow and using BBOT, and I

just picked some big domains. I picked dell.com. As you can see, BBOT found around 18,000 subdomains, while Trickest only found about 12,000. And this isn't really the interesting part, because, obviously, I can show this graph and it's a nice graph, you know, BBOT finds more. (This is what it looks like in Neo4j, by the way.) But if we actually diff those two lists: we take all the Trickest subdomains and all the BBOT subdomains, we find a subdomain that BBOT found that Trickest didn't, and then we ask BBOT how it found it. OK,

this is where it gets interesting, because you start to see these big, long discovery chains. Here's a discovery chain for one subdomain of dell.com, and this is a real discovery chain; this is a real subdomain, it exists. We seeded the scan with dell.com. BBOT hit an API and got some subdomains, so it found a subdomain here. It found an open port on that subdomain. It visited the website, and in the Location header it found a URL, so it visited the URL. Then it took that URL, went one directory down, visited that, got a response, found another URL in the body, and then pulled the DNS name out of

that URL, and that DNS name had a CNAME to this. So how would we have discovered this without recursion, right? A human could have done this, but how long would it have taken you to do it for 18,000 subdomains? This is recursion at work. Here's another one: seeded with dell.com, searched an API, found an IPv6 AAAA record, found a PTR record for that IP address that pointed to another hostname, found an open port on that IPv6 address, found an SSL certificate on that port, found a AAAA record pointing to an IP address, looked beside that

IP address and found an IP neighbor. The PTR record for that IP neighbor contains a DNS name, which contains a AAAA record, which has an open port, which has an SSL certificate, which has a DNS name. So these are the crazy discovery chains, and this is how BBOT works: every single piece of data that BBOT discovers has a discovery chain like this. It's very interesting; you can just look at it and see exactly how it found it.
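
A minimal sketch of how a discovery chain like that can be represented: each event keeps a pointer to the event that produced it, so any result can be walked back to the seed. The Event class here is illustrative, not BBOT's actual event model:

```python
# Sketch: discovery chains via parent pointers (illustrative, not
# BBOT's actual Event class).
class Event:
    def __init__(self, etype, data, parent=None):
        self.etype, self.data, self.parent = etype, data, parent

    def discovery_chain(self):
        node, chain = self, []
        while node:
            chain.append(f"{node.etype}:{node.data}")
            node = node.parent
        return " <- ".join(chain)

seed = Event("DNS_NAME", "dell.com")
port = Event("OPEN_PORT", "sub.dell.com:443", parent=seed)
url  = Event("URL", "https://sub.dell.com/login", parent=port)
print(url.discovery_chain())
# URL:https://sub.dell.com/login <- OPEN_PORT:sub.dell.com:443 <- DNS_NAME:dell.com
```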

Here's paypal.com: over twice as many subdomains. So again, this goes back to the superpowers; this is the superpower. If you're a bug bounty hunter, if you're a red teamer, if you're a blue teamer, you can scan your company with this, and you might actually find things you don't know about. This happens all the time: we bring on a customer, we scan their external web presence, we find a website that's vulnerable to RCE, and we bring it to them, and they're like, whose is this? Who has this website? They didn't know about it; it was supposed to have been turned off two years ago, and it's still there. So if you're a blue teamer, or if you're in threat intel, you can use this for collection. It gives you a huge edge. And

this is not a paid tool. It's an open source tool, and this enumeration was done without any API keys whatsoever. You can literally just pip install bbot on your laptop and get these kinds of results. OK, so this is the takeaway; if you have to remember one slide, this is definitely the one. It's a Python tool, it's open source, and you can just pip install it. It's a command line tool, but, oh, I should actually say: there are many output formats you can send results to. If you're a blue

teamer, if you like working in Splunk or Elastic, you can totally output to those formats. We output to Neo4j, in addition to the usual text, JSON, CSV, all that kind of stuff. It's also a Python library, so if you're a coder and you like Python, you can import it and run a scan in five or six lines of code, pretty straightforward.
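
For reference, the library usage being described looks roughly like this sketch, based on BBOT's README at the time of writing (pip install bbot); the exact Scanner arguments and preset names can change between versions, so check the current docs:

```python
# Sketch based on BBOT's README; verify against the current docs.
from bbot.scanner import Scanner

scan = Scanner("evilcorp.com", presets=["subdomain-enum"])
for event in scan.start():
    print(event)
```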

All right, so this is my last slide, just a fun little demo of something you can make out of BBOT: I made a little Discord bot. So there you go; you can scan stuff from Discord. All right, that's it. Thanks.

All right, so I just want to say thanks to all the other people who have contributed to BBOT. I'm the main author of BBOT, but there have been so many cool modules written by the community. And I also want to thank Steve Micallef, the creator of SpiderFoot, who should definitely get all the credit for this design. He is a genius, and he invented the whole recursive system in SpiderFoot that BBOT makes use of. So, yeah, I'll take questions now. If anybody has questions, go ahead.

Unknown: What do you find on false positives? Was it proportionate to the size of BBOT versus the other app, or was there any way of working around that?

Joel Moore: False positives as in, like, domains? As in subdomains, subdomain enumeration?

Unknown: A lot of times you pick up things that are supposedly

Joel Moore: Supposed to be real. Yeah, yeah, good question. It's a huge issue, right? If you ever run Amass, you always have to clean the output at the end; you always find, OK, this subdomain doesn't resolve to anything, so it doesn't exist, right? So, yes, I created BBOT specifically to deal with that kind of issue. By default, BBOT will only output things that are guaranteed to exist; in other words, it will only output resolved subdomains. Maybe a subdomain only has a TXT record, or only has a AAAA record, but as long as there is at least one type of record, it will output it. Yeah, good question.
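
A minimal sketch of that answer, using dnspython: gate output on whether a name resolves to any record type at all, not just an A record. Illustrative only; BBOT's real resolution logic also handles wildcards, timeouts, and retries:

```python
# Sketch: only emit names that resolve to *some* record type
# (assumes dnspython).
import dns.exception
import dns.resolver

def resolves(name):
    for rdtype in ("A", "AAAA", "CNAME", "MX", "TXT"):
        try:
            dns.resolver.resolve(name, rdtype)
            return True
        except dns.resolver.NXDOMAIN:
            return False           # the name doesn't exist at all
        except (dns.resolver.NoAnswer, dns.exception.Timeout):
            continue               # no record of this type; try the next
    return False

# hypothetical candidate list from a brute force
candidates = ["www.dell.com", "asdfasdf.dell.com"]
confirmed = [c for c in candidates if resolves(c)]
```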