Application whitelisting is a useful defense against users running unapproved applications. Whether you're dealing with a malicious executable file that slips through email defenses, or with a user who is attempting to run an application that your organization has not approved for use, application whitelisting can help prevent those activities from succeeding.
Some enterprises may deploy application whitelisting with the idea that it prevents malicious code from executing. But not all malicious code arrives in the form of a single executable application file, and many application whitelisting configurations do not prevent such code from executing. In this blog post, I explain how this is possible.
I've been working on a presentation called CERT BFF - From Start to PoC. In the process of preparing my material, I realized that a visualization could help people understand what happens during the BFF string minimization process.
What does it mean to say that an indicator is exhibiting persistent behavior? This is a question that Timur, Angela, and I have been asking each other for the past couple of months. In this blog post, we show you the analytics that we believe identify persistent behavior and how that identification can be used to identify potential threats as well as help with network profiling.
As you may have read in a previous post, the CERT/CC has been actively researching vulnerabilities in connected vehicles. When we began our research, it became clear that in the realm of cyber-physical systems, safety is king. Regulators, manufacturers, and consumers all want (and expect!) the same thing: a safe vehicle to drive. But what does safety mean in the context of security? This is precisely the question that the National Highway Traffic Safety Administration (NHTSA) asked the public in its Federal Register notice.
We worked with DHS US-CERT and the Department of Transportation's Volpe Center to study aftermarket on-board diagnostic (OBD-II) devices to understand their cybersecurity impact on consumers and the general public.
One of my responsibilities on the Situational Awareness Analysis team is to create analytics for various purposes. For the past few weeks, I've been working on some anomaly detection analytics for hunting in the network flow traffic of common network services. I decided to start with a very simple approach using mean and standard deviation for a historical period to create a profile that I could compare against current volumes. To do this, I planned on binning network traffic by some length of time to find time periods with anomalous volumes. The question I then had to answer was, "How should I define the historical period?" In this post, I explain the process I used to answer that question.
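The approach described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the author's actual analytic: it assumes traffic has already been binned into per-interval volumes (the function name, parameters, and the z-score threshold of 3.0 are all illustrative choices), builds a mean/standard-deviation profile from the historical period, and flags current bins that deviate from that profile.

```python
import statistics

def flag_anomalous_bins(historical_volumes, current_volumes, threshold=3.0):
    """Flag time bins whose traffic volume deviates from a historical profile.

    historical_volumes: per-bin counts (e.g., bytes or flows) from the
    historical period used to build the profile.
    current_volumes: per-bin counts for the period under analysis.
    Returns a list of (bin_index, volume, z_score) for bins whose
    z-score magnitude exceeds `threshold`.
    """
    mean = statistics.mean(historical_volumes)
    stdev = statistics.stdev(historical_volumes)

    anomalies = []
    for i, volume in enumerate(current_volumes):
        # Guard against a zero standard deviation (perfectly flat history).
        z = (volume - mean) / stdev if stdev else float("inf")
        if abs(z) > threshold:
            anomalies.append((i, volume, z))
    return anomalies

# Example: a bin with 500 units of traffic stands out against a
# history hovering around 100.
hits = flag_anomalous_bins([100, 110, 90, 105, 95], [102, 500])
```

The choice of the historical period, the question posed at the end of the paragraph, directly determines the mean and standard deviation here, which is why it matters so much for what gets flagged.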
It has been nearly a year since the DevOps blog launched its own platform. In the nearly 12 months since our launch, we have offered guidelines, practical advice, and tutorials to the ever-increasing number of organizations adopting DevOps (up 26 percent since 2011). In the first six months of 2016, an increasing number of blog visitors were drawn to posts highlighting successful DevOps implementations at Amazon and Netflix as well as tutorials on new technologies such as Otto, Fabric, Ansible, and Docker. This post presents in descending order (with #1 being the most popular) the 10 most popular DevOps posts published in the first six months of 2016.