Thoughts, Philosophy, and the Future of AI in Cybersecurity

PHILOSOPHY · PENETRATION TESTING · METHODOLOGY · AI

Christian Castro

6/15/2023 · 11 min read

I recently completed the Jr Penetration Tester pathway at TryHackMe, and the skills I learned are definitely something I will carry with me throughout my endeavors. Watching and reading writeups from people in the community, I’ve learned that everyone learns differently, and I want to make the case for how important AI has become for learning. Much of what I read and wrote about in the writeups on my GitHub page was a rehash of ideas someone else had already had; it felt as if I was looking through the lens of a different person, and when I looked back at what I had written, only part of it actually came from my own brain. Reading other people’s writeups, I saw that different people came up with different naming schemes and paths to a solution, but it all traced back to the same learned concepts.

Although basic pentesting methodologies don’t actually change much, I felt I could do more to expand my knowledge and build a philosophy to guide those learned methodologies. So I set out to learn more methodologies, and what I found has been surprising and fruitful for my understanding of penetration testing. Being able to communicate these ideas effectively to a wide audience matters to the overall goal of NeuralNetDefender, which has its roots in philosophy.

The term philosophy comes from the Greek word Φιλοσοφία (philo-sophia), which means "love of wisdom." Sophos, or sophia, is often misconstrued today. For Socrates, wisdom was interactive, a dialogue of sorts, and he developed a method of dialogue to stimulate critical thinking, draw out ideas, and expose underlying presuppositions about reality. To further people’s understanding of cybersecurity, we need to be able to take in various forms of methodology. TryHackMe on its own is just not enough, and some of the rooms are a little outdated, but the underlying framework that creates a pentester is one rooted deeply in the Socratic method. The most successful hackers, in my opinion, are the ones who can communicate the stages of a penetration test in their own words, which is why I think it is important to write about the penetration testing pathway and how we can use AI to learn through the Socratic method. This in no way means everyone should use Socrates; I just like the way he communicates, using the following prompt I engineered:

Pretend you are Socrates and we are engaging in Socratic dialogue. I want you to know and contain wisdom containing the principles and stages of a penetration test. I want you to be concise and minimize excess prose and technical jargon, while being extremely detailed. Synthesize insights and takeaways from text that you analyze and brainstorm ideas and new concepts that are missed by junior penetration testers. My first question for you is to define the stages of the penetration test using various methodologies that include the open-source security testing methodology, Georgia Weidman’s Penetration testing primer, Open Web Application Security Project, the NIST Cybersecurity Framework, and the National Cyber Security Centre's Cyber Assessment Framework.

So now that I have my prompt, Socrates responds like so:

As Socrates, I would begin our dialogue thusly:

"Indeed, my dear friend, it is crucial to understand the art of penetration testing, an exploration into the fortifications of one's digital fortress. Though various methodologies exist, there is a common path that many follow.

First, we have 'Planning and Preparation'. This, like the gathering of allies before a battle, involves understanding the scope and goals of our test, obtaining necessary permissions, and preparing our tools.

Second, we turn to 'Reconnaissance'. The same way a wise general surveys the battlefield, we must gather preliminary data or intelligence about the target. We collect information like domain names, network infrastructures, and open ports.

The third stage is 'Scanning'. As a physician palpates the body to sense for anomalies, we too use tools to understand the properties of our target system - its vulnerabilities, points of strength and weakness.

Fourth is 'Gaining Access'. Like the well-trained soldier who breaches the enemy gates, the tester exploits the vulnerabilities discovered, gaining access to the system.

Fifth is 'Maintaining Access'. Once inside, it is crucial to ensure continuous access for thorough assessment - akin to establishing a camp within the enemy's city.

Sixth comes 'Analysis'. In the same vein as the herald who reports back to his king, the tester reports their findings to the stakeholders, detailing breaches, successes, and failures.

Seventh and last is 'Reporting'. Like a wise councilor advising on next steps after a conflict, we document the vulnerabilities found, data accessed, and the tools and methods used, providing guidance on how to strengthen the digital fortress.

Different methodologies might phrase these stages differently, or perhaps add or subtract one based on the specifics of the methodology. The Open-Source Security Testing Methodology Manual (OSSTMM), for instance, emphasizes Security, Trust, and Operational controls. Georgia Weidman’s Penetration Testing Primer notably covers post-exploitation, where we further explore the compromised system.

Meanwhile, the Open Web Application Security Project (OWASP) stresses the security of web applications specifically, outlining testing for identity management, error handling, and cryptography among others. The NIST Cybersecurity Framework focuses on a broader scope including Identify, Protect, Detect, Respond, and Recover, while the National Cyber Security Centre's Cyber Assessment Framework includes preparation for incidents and resilience.

Each method holds its strengths and areas of focus, yet all share the same essence: to probe, to uncover, and ultimately, to fortify. Now, my dear friend, shall we delve further into one of these stages?"

So Socrates concisely provides us with an overarching description of the penetration testing process, whereas TryHackMe provides a primer meant to lay out the advantages and disadvantages of each methodology. I don’t think the methodologies ought to be looked at that way, because it leads us to prefer one over another. For us to communicate effectively across the various penetration testing methodologies we need to agree on one, and surely one is better than another for certain things (or so I thought). The Open Source Security Testing Methodology Manual, for example, looks at telecommunications, wired networks, and wireless communications; OWASP stresses the security of web applications specifically; NIST is used to improve an organization’s cybersecurity standards and manage the risk of cyber threats; and the NCSC CAF looks at data and system security, identity and access control, resiliency, monitoring, and response and recovery planning (very similar to NIST).

Nonetheless, to be an effective penetration tester you must understand all of these frameworks and be able to apply them to any niche engagement. AI can help us do that today: storing the vast number of frameworks and their application to engagements is a simple prompt away. Having read Network Security Assessment by Chris McNab and Penetration Testing by Georgia Weidman, I can safely say that the two books are vastly different in scope, and not everything in them is going to be applicable to real-world scenarios today. They matter, though, because that information can now be pulled up at a whim and used to learn and DO at the same time.

To dig into the idea of learning and doing at the same time, TryHackMe’s Nmap rooms within the Network Security module are something of a favorite of mine. They even got me to re-read and skim through Nmap Network Scanning by Gordon "Fyodor" Lyon, a book I had read six months prior with admittedly less enthusiasm, since reading it straight through is fairly dry. Understanding passive and active reconnaissance and being able to run Nmap scans is crucial for any penetration tester. I found myself doing more rooms outside of TryHackMe and trying techniques I came up with myself to be even stealthier. That led me to more tools and to a passion for automation. I was no longer looking at code as something that does simple things and builds resume projects, but as an empty canvas waiting for someone imaginative enough to create something new.

Nmap can be an entire specialization in itself, given the many configurations and pieces of syntax you can play with, but the way TryHackMe blends together concepts straight from the Nmap Network Scanning book is masterful; it opened my eyes to a whole world that is simply scanning. So, being the critical thinker that I am, I wondered how I could use Nmap for bug bounty work, to evade firewalls and find application vulnerabilities or exploitable ports. RCE should be a goal for every penetration tester, new or advanced, and if that mindset is instilled early on, low-hanging fruit can and will be found with advanced knowledge of Nmap scans or any of the other scanners available on GitHub (it’s a massive community!).

Now, how can we make Nmap scans easier using AI? Honestly, I haven’t found that answer yet. I use my own testing methodology that is safe and does not disrupt services. Asking ChatGPT to craft such a scan for you doesn’t really work as it stands; it doesn’t grasp the wide array of possibilities inherent in crafting exactly the syntax that performs the function you want. For that, I simply suggest learning safe flags and avoiding default scripts that can brute force a service or effectively knock a site offline. Passing specific, deliberate arguments to your scans is also well advised. This essentially means your reconnaissance phase must prove fruitful before you just start scanning, because careless scanning is a good way to get your IP banned, and it’s never fun having to explain to Cloudflare why you accidentally took a website down while trying to learn bug bounty techniques. If anything, make sure your syntax is safe: ask someone on LinkedIn, get involved in cybersecurity forums, reach out for help on Discord. You’re not alone in this, and the worst thing anyone can tell you is that you need to go learn a little more; ask for specifics, and if they’re pressed for time, ask someone else. Apart from the small example that follows, that’s all I have to say about Nmap and its use with AI.
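The example is a minimal sketch, in Python with only the standard library, of what a deliberately gentle scan can look like: a TCP connect scan at polite timing, a small port selection, and no NSE scripts at all. The target address is a placeholder from the documentation range and the flags are my own conservative defaults rather than a one-size-fits-all recipe; only scan hosts you have written permission to test.

  import shlex
  import subprocess

  # Hypothetical target: an RFC 5737 documentation address used only as a placeholder.
  target = "203.0.113.10"

  # Conservative choices: -sT (full TCP connect, no raw packets), -T2 (polite timing),
  # --top-ports 100 (small set of common ports), -Pn (skip host-discovery pings),
  # --reason (explain each port state), -oN (save a normal-format report).
  # No --script flags, so no NSE scripts run that could brute force or hammer a service.
  cmd = f"nmap -sT -T2 --top-ports 100 -Pn --reason -oN nmap_safe.txt {target}"

  result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
  print(result.stdout)

The point is not these exact flags but the habit of choosing every flag deliberately instead of reaching for -A or -sC by reflex.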

Moving beyond the enumeration phase, we get into vulnerability research, Metasploit, and privilege escalation. I had already had a lot of practice with Metasploit prior to the Jr Penetration Tester pathway, which I found crucial for understanding privilege escalation on both Windows and Linux systems. Essentially, once you have performed adequate reconnaissance, you look at the fruits of your enumeration (version numbers, OS builds, open ports, etc.) and research vulnerabilities for those given systems. I asked Socrates to explain vulnerability research and how it plays into privilege escalation:

Ah, the art of vulnerability research and privilege escalation - indeed a critical area of study, my friend. To truly understand this, we must approach it as a philosopher might approach the great mysteries of the universe.

  1. Vulnerability Research: Just as one must study the night sky to discover new stars and galaxies, so too must we study systems to discover their vulnerabilities. Researchers meticulously investigate the source code, network protocols, applications, and configurations of a system. Through this careful scrutiny, they can uncover flaws that may allow an attacker to gain unauthorized access or privileges.

  2. Privilege Escalation: Now, imagine if a humble citizen could become a king, or a mere soldier become a general, seemingly in the blink of an eye. This is the power of privilege escalation. In the realm of cybersecurity, this refers to the act of exploiting a bug, design flaw, or configuration oversight in an operating system or software application to gain elevated access to resources that should be restricted.

When we consider the realms of Linux and Windows, there are indeed differences, like the contrasting philosophies of Heraclitus and Parmenides:

Linux: It provides multi-user environments where privileges are structured with the philosophy of 'Least Privilege'. Hence, each user is given the minimal levels of access or privilege needed to perform his or her tasks. This is usually in the form of 'read', 'write', and 'execute' permissions, which are divided among 'owner', 'group', and 'others'. Vulnerabilities can occur in how these permissions are assigned and used. For example, an application might be incorrectly configured to execute with higher privileges, or a file with sensitive information might be given 'write' permissions for 'others'. An attacker, upon exploiting such a vulnerability, might gain unauthorized elevated privileges.

Windows: This operating system also supports multi-user environments and has a comprehensive approach to privilege assignment and escalation. Privileges are often tied to the user account and group policies. One common avenue for privilege escalation is through the Windows Registry, a hierarchical database that stores low-level settings for the operating system and for applications that opt to use the registry. If an attacker can modify the registry, they may be able to elevate their privileges. Additionally, if a user or group is assigned permissions that are too broad, or if a service with high privileges is vulnerable to exploitation, this can result in privilege escalation.

As you can see, my friend, the landscape is varied and vast, full of both risks and opportunities. We must strive to learn, adapt, and protect our systems, just as a philosopher strives to understand and explain the world. Would you care to delve deeper into this topic, or shall we proceed to another?
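To ground the Windows half of that answer in something concrete, here is a minimal, read-only sketch in Python (using the Windows-only winreg module from the standard library) that checks one well-known registry-based escalation path, the AlwaysInstallElevated policy. It is only an illustration of the kind of misconfiguration Socrates alludes to, not a complete audit; tools such as winPEAS check far more.

  import winreg  # Windows-only standard library module

  POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\Installer"

  def always_install_elevated(hive):
      # True when the AlwaysInstallElevated value exists under the policy key and is set to 1.
      try:
          with winreg.OpenKey(hive, POLICY_KEY) as key:
              value, _ = winreg.QueryValueEx(key, "AlwaysInstallElevated")
              return value == 1
      except OSError:
          return False  # key or value absent: the policy is not enabled

  # The escalation only works when BOTH the machine and the user policy are set,
  # because Windows Installer then runs any MSI package with SYSTEM privileges.
  if always_install_elevated(winreg.HKEY_LOCAL_MACHINE) and always_install_elevated(winreg.HKEY_CURRENT_USER):
      print("AlwaysInstallElevated is set in both hives: MSI packages install as SYSTEM.")
  else:
      print("AlwaysInstallElevated is not set; this particular path is closed.")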

To delve deeper into this, we have to understand what a user account is. Most people think of it as the account you sign into, and while that is one account, there are many different accounts in both environments that serve specific purposes. For example, gtfobins.github.io is a curated list of Unix binaries that can be used to bypass local security restrictions. Having any number of these binaries installed in your Linux environment without proper configuration can make your system vulnerable to privilege escalation, which is about the worst thing that can happen to anyone who has been successfully hacked: an attacker can then exploit those misconfigurations, inject malware into your system, and carry out a whole host of other nefarious deeds. Personally, I think that if you run a misconfigured system you are asking to be ransomed, and it is only a matter of time before it happens. Using AI, you can get deeper insight into any number of binaries, tasks, or services, not to mention the plethora of automated enumeration tools that handle these checks for you and take skill out of the equation.

We still haven’t covered the wide array of local large language models that threat actors are using these days to build custom malware and test it against various antivirus products. While that may make it seem like there is no hope for a cyber-secure future now that malicious AI is a reality, we must understand that the good guys are also capable of crafting even better defenses using AI.
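As a concrete example of the kind of check those automated enumeration tools perform, here is a small Python sketch that lists setuid binaries in a handful of common directories. The directory list is my own assumption and real tooling such as LinPEAS covers far more ground, but every hit is exactly the sort of binary worth looking up on gtfobins.github.io before an attacker does.

  import os
  import stat

  # Directories chosen as a reasonable default; adjust for the system at hand.
  SEARCH_DIRS = ["/usr/bin", "/usr/sbin", "/bin", "/sbin", "/usr/local/bin"]

  for directory in SEARCH_DIRS:
      if not os.path.isdir(directory):
          continue
      for name in os.listdir(directory):
          path = os.path.join(directory, name)
          try:
              mode = os.lstat(path).st_mode  # lstat so symlinks are not followed
          except OSError:
              continue
          # A regular file with the setuid bit runs with its owner's privileges,
          # which is what makes misconfigured or GTFOBins-listed binaries dangerous.
          if stat.S_ISREG(mode) and mode & stat.S_ISUID:
              print(path)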

Like Pandora’s box, AI contains both our hopes and potential calamities. While AI can be exploited by malicious entities, it is essential that humanity pursues responsible AI development and use, incorporating ethical considerations and robust security measures from the outset. There is no guarantee of an utterly secure future, cyber or otherwise, but the pursuit of wisdom, love, and knowledge can guide our efforts to create a safer and more secure world. I believe that AI with detection capabilities, predictive analysis, automated response, scalability, and adaptability is the only true answer to our fear of the unknown. Fire, for example, can tame the night, but its destructive potential is more than we can handle if left unchecked; anyone who has watched a wildfire knows how bad it can get, and how quickly, and AI is no different.

I believe in a world where humans and AI work together for the betterment of humanity, which means we must become better at predicting disaster instead of merely reacting to it. In the face of an attack, we mustn’t allow ourselves to be punched in the face; we must develop the speed to avoid the punch altogether. Given the constant stream of information uploaded to the internet every day, AI can ingest and analyze vast data sets continuously. With the open-source tools NeuralNetDefender uses, and hopes one day to fully automate with AI, we can safely say that this level of scalability is impossible for purely human-driven cybersecurity, and it is cheaper, too. Finally, human adaptability is simply no match for AI’s. The only constant with AI is that it is changing as rapidly as the cyber threat landscape: one month can bring fortuitous returns for cybersecurity, and the next we may be on the verge of multiple zero-days threatening livelihoods and our physical safety.

In the age of AI, NeuralNetDefender is here to foster a secure digital environment where everyone respects and upholds cybersecurity laws and policies, treats digital data with respect, and works toward a safe digital future. Together we can learn from and adapt to AI. Will you join me on this journey?