The “Attacking Active Directory Game” is part of a project in which our researcher Ondrej Lukas developed a way to create fake Active Directory (AD) users as honey-tokens to detect attacks. His machine learning model was trained on real AD structures and can create a completely new fake user who is strategically placed within the structure of a company.
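To make the honey-token idea concrete, below is a minimal sketch (not the project's code) of planting a fake AD user with the ldap3 library. The server, credentials, OU and the fake identity are placeholders; in the actual project the attributes and placement are chosen by the machine learning model, which is not shown here.

```python
# Hedged sketch: plant a decoy AD user as a honey-token using ldap3.
# Any later authentication attempt or lookup of this account is suspicious.
from ldap3 import Server, Connection, NTLM, ALL

server = Server("dc01.example.local", get_info=ALL)            # placeholder domain controller
conn = Connection(server, user="EXAMPLE\\admin", password="***", authentication=NTLM)
conn.bind()

fake_user_dn = "CN=Jane Smith,OU=Finance,DC=example,DC=local"  # placeholder placement in the tree
conn.add(
    fake_user_dn,
    object_class=["top", "person", "organizationalPerson", "user"],
    attributes={
        "sAMAccountName": "jane.smith",                        # decoy account name
        "userPrincipalName": "jane.smith@example.local",
        "description": "Financial controller",                 # believable role description
    },
)
conn.unbind()
```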
Machine Learning Leaks and Where to Find Them
Machine learning systems are now ubiquitous and work well in many applications, but how much information they can leak is still relatively unexplored. This blog post explores the most recent techniques that cause ML models to leak private data, gives an overview of the most important attacks, and explains why these attacks are possible in the first place.
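One of the attacks the post covers is membership inference, and the intuition behind why it works is simple: overfitted models tend to be more confident on records they were trained on. Below is a minimal, hedged sketch of a confidence-threshold membership inference attack using scikit-learn (an illustration of the idea, not code from the post; the dataset, model and threshold are arbitrary choices).

```python
# Hedged sketch: a confidence-threshold membership inference attack.
# Training samples tend to receive higher confidence than unseen samples,
# which leaks whether a given record was part of the training set.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# The "victim" model that the attacker can only query.
victim = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def infer_membership(model, samples, threshold=0.99):
    """Guess 'member of the training set' when the model is very confident."""
    confidence = model.predict_proba(samples).max(axis=1)
    return confidence >= threshold

# Members (training data) should be flagged far more often than non-members.
print("flagged members:    ", infer_membership(victim, X_train).mean())
print("flagged non-members:", infer_membership(victim, X_test).mean())
```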
Creating "Too much noise" in DEFCON AI village CTF challenge
During DEF CON 26 the AI Village hosted a jeopardy-style CTF with challenges related to AI/ML and security. I thought it would be fun to create a challenge for it, and I had an idea that revolved around denoising autoencoders (DAs). The challenge was named “Too much noise”, but unfortunately nobody solved it during the CTF. In this blog post I would like to present the idea behind it and how one could go about solving it.
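Since the challenge revolves around denoising autoencoders, here is a minimal sketch of the building block itself: a small DA trained on MNIST with Keras. This is only an illustration of the technique (the architecture, noise level and dataset are my own assumptions), not the challenge or its solution.

```python
# Hedged sketch: a small denoising autoencoder on MNIST (Keras).
# The network is trained to map noise-corrupted images back to the clean originals.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32").reshape(-1, 784) / 255.0
x_test = x_test.astype("float32").reshape(-1, 784) / 255.0

# Corrupt the inputs with Gaussian noise; the target is the clean image.
noise = 0.5
x_train_noisy = np.clip(x_train + noise * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise * np.random.normal(size=x_test.shape), 0.0, 1.0)

autoencoder = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),    # encoder
    layers.Dense(32, activation="relu"),     # bottleneck
    layers.Dense(128, activation="relu"),    # decoder
    layers.Dense(784, activation="sigmoid"), # reconstructed image
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Learn to remove the noise: noisy input -> clean target.
autoencoder.fit(x_train_noisy, x_train, epochs=5, batch_size=256,
                validation_data=(x_test_noisy, x_test))

denoised = autoencoder.predict(x_test_noisy[:10])
```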