Part 4 of a 4 Part Blog Series
Originally published by Medigate www.medigate.io. Reposted with permission.
This blog concludes our series on how to think about and implement a successful Clinical Zero Trust (CZT) strategy. We will cover the easy and hard components of phases four and five – Monitor and Automate – of an organization’s journey to establish and maintain an effective CZT stance.
To recap, the first blog defined CZT and outlined the five phases organizations typically embark on when trying to create and implement CZT in their clinical setting. The second blog covered what is involved in the Identify and Map stages, while the third blog covered the Engineer stage. So, let’s take a look at these last two phases of a CZT lifecycle.
Phase 4: Monitor
What We’re Trying to Accomplish in the Monitor Stage
The last thing anyone wants to do is introduce security that interrupts or creates a point of failure that adversely affects the delivery of care. The reality is that despite all your best efforts at Identifying, Mapping, and Engineering the network (Phase 1, 2, and 3, respectively) to create and optimize an effective CZT stance, you may have missed something.
The complexity and dynamic nature of clinical settings make it highly likely that something went unaccounted for – a DNS server may have been missed because moving the IV pump to the floor below was never mapped, or a connection to a backend database may have been overlooked because the link to billing was forgotten. We are human and, as such, we make mistakes. The problem is that a mistake within a clinical setting can have significant, even life-changing, consequences.
That’s why Monitoring before actually implementing the policies and actions you have Engineered is critical. It’s often how you figure out exactly what your security domains look like; it should help you understand exactly what is going to happen, so there are no surprises when you actually start enforcing.
The Unavoidable Medigate Mention
Note, this phase is only possible with a tool, like Medigate, that can enable it. I don’t want to be promotional or sound self-serving, but there really is no other way. Without something that can monitor and understand all the protocols and communications occurring in your environment, in real time, it is too time-consuming and labor-intensive to manually monitor and try to figure out what a policy will do.
It takes correlating a ton of different digital outputs from all the different devices involved and mapping them, in real-time, to the physical operations. (Note, this is often why many organizations haven’t implemented a Zero Trust stance to date – they don’t want to deploy security controls that introduce risks to the operations, but they have no way to test to see what their proposed controls will do, so they stick with doing nothing.)
With the ability to Monitor, you have a way to gather the information you need about what the controls you want to implement will actually do to your environment. This will give you the confidence to establish and maintain a successful CZT strategy.
What’s Going to be Easy
The easy part of the monitoring phase (if you have a tool to help you) is identifying when something breaks. You can immediately see when a policy or action is going to disrupt or block something and have unintended consequences for a procedure or care protocol. Once identified, it can be fairly easy to make the appropriate adjustments – e.g., ensure the server, communication, etc. is allowed – to avoid any problems.
Also easy is identifying anomalies. Most security devices can tell you when something happens that is unusual or new. As we discussed earlier (in Phase 3), the majority of devices in clinical settings, where care is actually delivered, are deterministic. They are only meant to perform certain functions. They are not like desktops or other consumer compute resources that are controlled by unpredictable humans. They have a specific set of capabilities and manufacturer-intended behaviors.
If you know what those capabilities and behaviors are, then it is very easy to identify when a device starts doing something or communicating in a way that it shouldn’t. Note, that “If” can be very hard to achieve. The capabilities and behaviors of medical devices are not necessarily easy to know – it takes a deep understanding of clinical protocols, manufacturers, and workflows. But, for solutions that have invested the time and effort to acquire this knowledge, it is then very easy to identify when something is operating outside of what’s normal or doing something it shouldn’t be doing.
While monitoring for breaks or security risks within your CZT environment is going to be somewhat easy, with the right tools, it’s going to be significantly harder to monitor for policy deviation…
What’s Going to be More Difficult
The hard part of the monitoring phase is figuring out what you may have missed. It is much harder to figure out if you mapped the care protocols and built the policies to protect them correctly. You need to ensure your physical and digital flows and boundaries align. While digital flows can be easily tracked, it can be extremely difficult to track physical communications in the same way. Understanding what is happening as it happens in the clinical setting is made that much harder by the complexity and dynamic nature of the environment.
You need to make sure you have accounted for everything when you are building your policies because the stakes are high – you cannot be wrong. You will need to see:
We recommend remaining in the Monitoring phase as long as you need to determine whether your policies and actions are correct. Only when you are absolutely certain they are should you move to implement. Some tips and tricks:
Phase 5: Automate
What We’re Trying to Accomplish in the Automate Stage
The Automate phase is where CZT is made real. This phase allows you to apply cyber controls to your care protocols and processes, via your enforcement points, and start to reap the benefits of a CZT stance. It covers the ongoing operationalization of your CZT strategy, helping you identify and then define what comes next.
What’s Going to be Easy
Saying you are going to automate is the easy part – that’s it. Actually doing it is very difficult.
What’s Going to be More Difficult
Assuming you have accurately Identified all your devices (Phase 1), Mapped them to all the appropriate cyber and physical flows (Phase 2), Engineered policies to effectively protect those flows (Phase 3), and then Monitored the enforcement of those policies to optimize the safe delivery of care (Phase 4), then you are ready to figure out what comes next.
Basically, you are going to define what the consequences of a policy violation are and then automate those responses. For example, do you want to create a ticket when a new device is detected on the network? Who does that ticket go to? What are the next steps? Or maybe you want to initiate a call to the nurses station associated with that security domain? Or push a rule to the firewall to stop the device from connecting to anything? Or take a more drastic measure and kick the device off the network entirely?
The options are endless – what you choose will probably depend on the data, device, and care protocol involved. The key is that once you know what you want to do, you automate that process so it can be done quickly and efficiently. Operationalizing these processes will take integrations with all sorts of systems across your entire environment, from your ticketing and billing systems to your security devices and CMMS.
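As a rough illustration, the violation-to-response mapping described above can be sketched as a small playbook dispatcher. The handler names and the ticket/firewall actions below are placeholders for whatever ticketing system, firewall API, or NAC integration you actually run, not real product calls:

```python
# Hypothetical sketch: map a policy-violation event to a set of automated
# responses. Every handler here is a stand-in for a real integration.

def open_ticket(event):
    # e.g., create a ticket and route it to the biomed/security team
    return f"ticket opened for {event['device']} -> biomed team"

def push_block_rule(event):
    # e.g., push a deny rule to the firewall protecting that security domain
    return f"firewall rule pushed: deny all for {event['device']}"

def quarantine(event):
    # e.g., the drastic option: move the device off the production network
    return f"{event['device']} moved to quarantine VLAN"

# The severity of the violation decides the response; in practice you would
# tune this per data type, device, and care protocol involved.
PLAYBOOK = {
    "new_device": [open_ticket],
    "policy_violation": [open_ticket, push_block_rule],
    "active_threat": [open_ticket, push_block_rule, quarantine],
}

def respond(event):
    """Run every handler registered for this event type, in order."""
    return [handler(event) for handler in PLAYBOOK.get(event["type"], [open_ticket])]
```

The value of encoding the playbook as data is that the consequences of a violation are reviewed and agreed on in advance, rather than improvised at 2 a.m.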
Look for those platforms that can facilitate connections across your ecosystem because you don’t want to have to use multiple different tools to automate. The goal is not to just achieve security, but operational security – that’s what ultimately makes clinical zero trust so valuable to the business of healthcare.
You want to make things easier, not harder, so look for solutions that can help you automate by talking to all the different types of systems and equipment in your environment. Note, vendor-agnostic platforms, which have expressly built their solution to integrate with other technologies, will probably be able to support your needs best. You want solutions that can offer broad vendor support across different types of platforms - operational, security, ticketing, tracking, location platforms – to try to maximize the value and minimize the complexity of maintaining your CZT stance.
For more general information, you can check out Medigate’s white paper on Clinical Zero Trust.
Part 3 of a 4 Part Blog Series
Originally published by Medigate www.medigate.io. Reposted with permission.
This blog covers phase three – Engineer – of the planning and roll out of a Clinical Zero Trust (CZT) strategy. This is where the rubber hits the road, so to speak, as we define the actual security controls that we want to implement to mitigate the risks to a hospital’s physical and digital systems.
To set the context, let me remind you this is the third in a series of four blogs that delve into what it really takes to establish and implement a successful CZT strategy. The first blog defined CZT and outlined the five phases that health systems embark on during their journey to creating a sustainable CZT stance. Those phases include: Identify, Map, Engineer, Monitor, and Automate.
The second blog covered what is involved in the Identify and Map stages, and this one will focus on what is going to be easy and hard about engineering CZT policies that effectively protect the clinical setting. (If you’re sensing a cadence, you already know the fourth and last blog will review the Monitor and Automate stages!)
Phase 3: Engineer
What We’re Trying to Accomplish in the Engineer Stage
The goal of this stage is to define the policies and actions that can be implemented to protect the integrity and flow of each care protocol. As we’ve noted before, CZT is not about safeguarding access to data and devices, but the delivery of care.
The goal is to insert controls to defend the smallest “surface area” possible. These are the physical boundaries to the digital flows that you identified and mapped (phases 1 and 2). You are basically keeping intact the combination of devices and processes that need to operate unimpeded and uninterrupted, and inserting security at the boundaries to mitigate risks in a way that ensures you are doing no harm.
Example of What a CZT Policy Covers
Take an IntelliVue MX800. A CZT policy that protects the integrity of the service that bedside patient monitor delivers would include:
- Allow DHCP bi-directional communications and access to the local DNS (almost every device will need this).
- Allow it to talk to the local nurses’ call station(s), via an IP address or DNS name, depending on the topology.
- Allow it to use the HL7 protocol for outbound communications only (Layer 7).
- Deny everything else – it can’t talk to anything else or receive communications from anything else.
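To make this concrete, that MX800 policy can be sketched as an allow-list evaluated top to bottom with a default deny. The rule schema, peer names, and evaluator here are an illustrative sketch, not any vendor’s actual policy format:

```python
# Hypothetical allow-list for the bedside monitor; first matching rule wins,
# and anything that matches nothing falls through to the default deny.
POLICY = [
    {"proto": "DHCP", "direction": "both",     "peer": "any",                 "action": "allow"},
    {"proto": "DNS",  "direction": "both",     "peer": "10.0.0.53",           "action": "allow"},
    {"proto": "any",  "direction": "both",     "peer": "nurse-station.local", "action": "allow"},
    {"proto": "HL7",  "direction": "outbound", "peer": "any",                 "action": "allow"},
]
DEFAULT_ACTION = "deny"  # everything else is denied

def evaluate(proto, direction, peer):
    """Return the first matching rule's action, else the default deny."""
    for rule in POLICY:
        if rule["proto"] not in ("any", proto):
            continue
        if rule["direction"] not in ("both", direction):
            continue
        if rule["peer"] not in ("any", peer):
            continue
        return rule["action"]
    return DEFAULT_ACTION
```

Note how the HL7 rule captures the directionality: an outbound HL7 flow is allowed, but the same protocol inbound falls through to the deny.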
What’s Going to be Easy
The easier part of developing policies centers around the fact that cyber physical systems are deterministic. These devices almost always act the same way. A patient monitor will always send data on a certain port to a certain place at a certain time, and it will use the same protocols and speak to the same hosts (e.g., external control systems, nurses’ stations). In essence, the way these devices work and behave is predictable.
This means once you have visibility (phase 1) into what the device is, you can probably determine what it should be doing (phase 2). That means engineering a process and creating a policy/rule to set boundaries and constrain its activity should be fairly easy. Of course, we know that nothing is as easy as it should be…
What’s Going to be More Difficult
We can’t say it enough: clinical settings are anything but simple or static. The hard part of developing effective CZT policies is accounting for the continuous changes in the environment and understanding all the complexity and inter-dependencies of all the people and devices required by different care and business protocols.
For the devices that never move, it’s fairly easy to scope a policy that can protect all the operational processes that device is involved in. For example, MRIs typically don’t move, so it’s much easier to determine the scope of the communications and interactions that need to be allowed. Devices that move all over the place, however, like a patient monitor, are going to be much more difficult to craft a policy for.
This gets us back to mapping (phase 2). If you have mapped your environment well, and effectively documented the physical workflows and care protocols and tied them to the cyber flows, you should have all the potential steps, devices, and dependencies you need to consider. This allows you to create a policy, like the example we described above for the IntelliVue monitor.
When that IntelliVue monitor is moved to another floor, however, the policy will likely need to change. For starters it will probably need to talk to different DNS servers and different control stations/nurses’ stations. It may also need to be applied by different enforcement points, which can create additional complications.
In your environment you probably have a mix of switches, NACs and firewalls you are using as control points. Each enforces policies that are written differently, and each may have a different control structure. If you have to write policies based on the lowest common denominator for all your control points, you will lose all the impact of your CZT policy. For example, you don’t want to have to write a policy that operates at Layer 4 - allow port 2575, TCP, UDP. You want to be precise, which means operating at layer 7, where you can specify the allowed protocols (HL7).
This means you need a way to accommodate device movement and apply policy accordingly, so it can change as you are running down the hall if need be. One way to accomplish this is to tie policies to MAC addresses to ensure your controls can recognize the device and apply the right policy. Often you need a solution, like Medigate, that can fill in the missing pieces that control points have on devices, so you aren’t forced to apply generic, homogenized policies that do little to mitigate risk.
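The idea of tying policy to the device rather than to a network location can be sketched as follows. The MAC address, device profile, and per-floor DNS/nurse-station addresses are all made up; the point is only that the allow-list is keyed to the device’s identity and re-resolved against wherever it currently sits:

```python
# Illustrative only: policy follows the device (by MAC), while the
# location-specific parameters are resolved at enforcement time.

DEVICE_PROFILES = {
    "00:1b:63:aa:bb:cc": "patient_monitor",   # the roaming IntelliVue unit
}

FLOOR_CONTEXT = {
    "floor3": {"dns": "10.3.0.53", "nurse_station": "10.3.0.10"},
    "floor4": {"dns": "10.4.0.53", "nurse_station": "10.4.0.10"},
}

def policy_for(mac, floor):
    """Build the allow-list for a device wherever it currently sits."""
    profile = DEVICE_PROFILES.get(mac)
    if profile != "patient_monitor":
        return []  # unknown devices get no allowances
    ctx = FLOOR_CONTEXT[floor]
    return [
        ("allow", "DNS", ctx["dns"]),
        ("allow", "HL7-outbound", ctx["nurse_station"]),
    ]
```

When the monitor moves from floor 3 to floor 4, the same identity yields a new concrete rule set, which is exactly what a location-bound policy cannot do.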
The primary recommendation for this phase is to investigate your control points before you enforce policies to understand the limitations of the solutions you have or are looking to buy. You can look at:
Watch for our upcoming blog on phases 4 and 5. For more general information, you can check out Medigate’s white paper on Clinical Zero Trust.
Defining Clinical Zero Trust: What you need to be thinking about when implementing a zero trust strategy for healthcare
Originally published by Cloud Harmonics www.cloudharmonics.com. Reposted with permission.
My last blog looked at the complex, dynamic cybersecurity landscape that makes it very difficult for someone to step into a cybersecurity role and succeed. If we are to truly start to address the cybersecurity skills gap, we need to make it easier for someone to see, understand and shut down attacks – this requires a combination of technologies, services and experiential/educational components:
More than half of respondents (55%) to a survey by Intel Security “believe cyber-security technologies will evolve to help close the skills gap within five years.” Likely this will come in the form of advances in more autonomous cybersecurity. The U.S. Department of Homeland Security painted a picture of what this might look like, back in 2011, in the paper, “Enabling Distributed Security in Cyberspace.” They described an ecosystem where “cyber participants, including cyber devices are able to work together in near-real time to anticipate and prevent cyberattacks, limit the spread of attacks across participating devices, minimize the consequences of attacks, and recover to a trusted state.”
This is in contrast to the typical cybersecurity landscape today – in which an organization has a host of different cybersecurity technologies to try to protect all their different users, systems/devices and workflows, many of which they are blind to (e.g. cloud applications) or have no control over (e.g. personal devices). Each device requires a cyber analyst to not only deploy and manage it, but also interpret the information it produces and try to link it to other data to make sense of what is happening. Often analysts are siloed off, responsible for protecting one part of the network or managing one type of solution, making it hard to get access to everything they need to see the bigger, complete picture. Automation and orchestration can help bring all this information together to start to alleviate these problems.
As autonomous cars and drones have grown in popularity, so have more autonomous security measures, which are better able to keep pace with the automation being employed by hackers to launch their attacks. We have seen vendors increasingly leverage artificial intelligence (AI), machine learning, orchestration and automation in an effort to accelerate an organization’s ability to identify and respond to changing cybersecurity needs. These measures can dramatically simplify the deployment and ongoing management of the security infrastructure, particularly for those elements that are manually-intensive or lend themselves to ‘black and white’ decisions (e.g. when entities or events can be easily incriminated or exonerated).
For example, a large organization can average close to 17,000 alerts a week, and only one in five ends up being a real issue. Investigating each and every alert isn’t practical or an effective use of resources, but having a solution (e.g. incident response/analytics) that can automate investigations, enabling analysts to quickly understand what’s going on and prioritize their activities, is sustainable. Hence, we have seen an explosion in the IR automation market – the Enterprise Strategy Group found that 56% of enterprise organizations “are already taking action to automate and orchestrate incident response processes;” Technavio has the IR system market growing at a compound annual growth rate (CAGR) of 13%.
Other cybersecurity market segments and vendors are recognizing the need for automation/orchestration/machine learning/AI to address the skills gap. Palo Alto Networks’ latest release (8.0) of their platform had a number of capabilities that improve the efficiency of, and coordination between, elements of the cybersecurity infrastructure (see our blog, xxxxx). Our colleagues at SecureDynamics have told us they’ve experienced an uptick in demand for their Rule Migration tool, which automates the translation of legacy firewall policies to next-generation application-based rule sets. There are also open source projects, such as MineMeld, that show us how organizations can potentially use external threat feeds to support self-configuring security policies.
To truly ease the burden on cybersecurity analysts and improve the efficiency and productivity of the cybersecurity infrastructure, we need more of these kinds of innovations and automations.
The reality is there are always times when organizations, even those with SOCs that are skilled and staffed appropriately, may need a little help. This is where services come in; we are finding there is greater acceptance that augmenting resources with a service offering can be a good way to enhance the effectiveness of an organization’s cybersecurity strategy and implementation. An outsider’s view can give organizations the knowledge they need, a fresh perspective or a new way of thinking that helps drive better decision-making and ultimately better security.
The problem is managed security services providers (MSSPs) are having to staff up themselves to meet the demand. Research and Markets predicted the MSSP sector will reach $31.9 billion by 2019, with a CAGR of 17.3% – this may be low if you consider a new report by MarketsandMarkets puts the incident response services market, one of the segments within the overall MSSP market, at $30.29 billion by 2021, with a CAGR of 18.3%.
To address the demand and protect against the ever-expanding threat landscape, these MSSPs have to build (or acquire) the talent – which is why we’ve seen a lot of movement in this space (e.g. FireEye’s acquisition of Mandiant, IBM’s acquisition of Lighthouse Security Group LLC, and BAE Systems’ acquisition of SilverSky). Ultimately, to deliver the experience and know-how organizations need, we are back to the cybersecurity skills gap.
Nothing replaces the knowledge and expertise of a security analyst, in terms of being able to identify, contain and fully remediate an incident. Unfortunately, as we’ve already mentioned, these folks are in short supply, so organizations need to develop this in-house talent themselves. 73% of organizations in a SANS survey indicated “they intend to plan training and staff certifications in the next 12 months.”
But what kind of training do they need to do and what kinds of skills do they need to build? Due to the aforementioned breadth of threats, threat actors, systems/devices and workflows that could be involved in a cyber incident, it’s hard to create a concrete list of things to do or know. One such attempt might focus on the layer they are trying to secure – e.g. network, endpoint, application, server, data, cloud, etc.; while another might look at more general areas – e.g. intrusion detection, secure software development, risk mitigation, forensics, compliance, monitoring, identity management, etc. The reality is an organization needs to cover all these bases.
This is probably why half the companies in the “Hacking the Skills Shortage” study said they would like to see a bachelor’s degree in a relevant technical area. This gives analysts a general background that can be built upon to develop the deeper, relevant knowledge needed to better protect an organization’s specific environment.
The most effective skill building comes from real-world experience. I’m reminded of the Benjamin Franklin quote “Tell me and I forget, teach me and I may remember, involve me and I learn.” We have seen higher education institutions re-thinking the way they are structuring their learning to be much more hands on and interactive. Jelena Kovacevic, head of the electrical and computer engineering department at Carnegie Mellon University, explained to U.S. News, "At the center of meeting today's challenges is an age-old idea: Learn by making, doing and experimenting. We can do this by imbuing real-world problems into our curricula through projects, internships and collaboration with companies."
Not only seeing, but doing hacks firsthand is one of the best ways for individuals to start to identify, understand, and ultimately stop them. As a result, 68% of the respondents to the “Hacking the Skills Shortage” study said hacking competitions are a good way for individuals to develop critical cybersecurity skills.
We, at Cloud Harmonics, have seen the difference that doing versus hearing or watching has on a person’s understanding. We developed our proprietary learning environment, Orchestra, to give attendees (we train more than 4,000 users every year) the opportunity to not only interact with the instructors who are leading the sessions, but also the solutions themselves. Our virtual sandbox (vSandbox) and Ultimate Test Drive (UTD) days give attendees real-world experience with solutions, in a way that enables them to see firsthand how they could deploy, use and benefit from their capabilities in their own environment.
Because there is really no substitute for experiential learning, we expect to see more users signing up to test and work with solutions in a safe environment to speed their deployment and use of advanced features in their own organization. Ultimately, to address the cybersecurity gap, it will take a confluence of technologies, services and experiential learning to build the skills and capabilities organizations need to keep up with (and ideally get ahead of) all the threats targeting their organization.
Originally published by Cloud Harmonics - www.cloudharmonics.com, and reposted with permission.
Reflecting on the time I recently spent with some of our sales engineers, I was reminded that one of the biggest issues faced by most of the end-user organizations we work with (through our value added reseller (VAR) partners) is a lack of cybersecurity expertise. Organizations simply can’t recruit or retain all the talent they need to mount an effective defense against all the different threats they are facing.
We’ve all seen the stats – 82% of IT professionals report a lack of cybersecurity skills within their organization; more than 30% of cybersecurity openings in the U.S. go unfilled every year; by 2019, there will be one to two million jobs unfilled in the global cybersecurity workforce.
So, why aren’t more people flocking to cybersecurity? Particularly when cybersecurity professionals are being heralded as one of the job market’s hottest commodities, in a cybersecurity market that experts predict will grow to $170 billion by 2020? I think, to state the obvious, it’s because cybersecurity is hard, and only getting harder.
Cybersecurity experts have to stay on top of all the new threats facing their organization. That’s no small task, considering:
Cybersecurity experts also have to stay on top of the ever-growing number of highly skilled hackers targeting their organization, all of whom have different, yet extremely persistent motivations, such as:
In addition, cybersecurity experts have to try to identify and shut down all the different vulnerabilities (and ways attackers can get “in”) throughout their organization. The universe of attack vectors is exploding, as organizations increasingly rely on:
Cybersecurity experts have to deploy, manage and maintain a range of different cybersecurity technologies to try to protect against all the threats and attackers targeting their organization. They need to monitor, identify and shut down the attack’s ability to exploit all the different attack vectors that potentially exist.
As with everything in cybersecurity, determining what needs to be implemented to defend the ongoing operations of their business and the integrity and privacy of their critical assets is anything but simple. There were almost 600 vendors exhibiting at this year’s RSA and close to 250 startups doing things in and around the event. Almost all have marketing messages that make seemingly indistinguishable claims, offering overlapping capabilities that make the marketplace complex and confusing.
It’s hard for even seasoned cybersecurity professionals to navigate, so how do we expect someone entering the field to get up to speed on everything? How do we expect them to be able to identify all the different vulnerabilities, threats and actors they could come up against? How do we expect them to learn how to use all these different systems and figure out what to do?
The simple answer is we can’t expect them to do these things until we show them how to do them. If we are to address the cybersecurity shortage and recruit and retain vital cybersecurity personnel, we are going to have to change our expectations and adjust our approach. If we don’t, the cybersecurity skills gap is only going to get wider. For my thoughts on what these expectations should look like and what the approach should be to develop new talent to start to better address the skills shortage, check out part 2 of this blog series - "What Do We Need to Do to Address the Cybersecurity Expertise Shortage".
Interest and momentum around OpenFlow and software defined networking (SDN) has certainly been accelerating. I think people are so excited about SDNs because, while we have seen a lot of innovation around the network – in the wireless space, the data center, and all the applications – there has been very little innovation in the network itself – the routers and switches – within the last decade. The prospect of completely re-architecting the network, by separating the control plane from the data plane, opens up a lot of new possibilities.
With SDNs, organizations aren’t constrained by how the network is built. They are free to build a dynamic, fluid infrastructure that can support fluctuating demands, shorter implementation cycles (check out Stanford’s Mininet), and completely new business models. But, as I have mentioned before, we are just at the beginning. While those of us watching this space have been impressed by the rapid pace of innovation within SDNs to date, it’s hard to predict what’s going to happen next. But that won’t stop us from trying!
I spent the last few weeks checking in with some SDN pioneers to find out what’s going on that’s of interest in the SDN space these days. Among those experts whom I spoke with were Chris Small (CS), Network Researcher at Indiana University, Phil Porras (PP), Program Director at the Computer Science Lab of SRI, and Dan Talayco (DT), Member of the Technical Staff at Big Switch Networks. The following are some excerpts from my discussions:
What are the top projects in your mind going on right now around OpenFlow and SDNs?
DT: “It’s hard for me to choose just a couple to talk about. Which is a great thing, isn’t it? There are three very different parts of the ecosystem in SDN. First, there are the switches providing the infrastructure that moves packets. Then there are controllers. This is a layer of centralized software controlling the forwarding behavior of the infrastructure (most often through the OpenFlow protocol) and providing a platform for the third layer, which is all the SDN Applications. These are software programs that run on controllers. They are given visibility into the topology of the network and are notified of events in the network to which they respond.
Here are four open source SDN projects I’d point to. I’m more familiar with the lower two layers (switches and controllers), so mine are from there:
Floodlight is an open source controller in Java. It was introduced less than a year ago I believe, but has been getting rapid acceptance in the OpenFlow community. Currently it has more public forum discussion traffic than all other controllers combined.
Open vSwitch (OvS) is a multi-layer virtual switch released under the open source Apache 2.0 license. Its focus is primarily as a virtual switch, though it has been ported to various hardware platforms as well. Some of the originators of OpenFlow created OvS.
OFTest was developed at Stanford. It’s a framework and set of tests implemented in Python that give people a way to validate the functionality of their OpenFlow switches. There was even a simple software switch written in Python to validate OpenFlow version 1.1 that is distributed with OFTest.
Indigo is a project, also started at Stanford, providing an implementation of OpenFlow on hardware switches. It runs on several hardware platforms and has been used in a number of different environments. This project is currently being updated to describe a generic architecture for OpenFlow switches targeting hardware forwarding.”
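To make DT’s three-layer picture concrete, here is a toy sketch in which an “application” asks a “controller” to install an OpenFlow-style match/action entry that a “switch” then applies in its data path. The field name loosely echoes an OpenFlow 1.0 match field, but nothing here is a real controller API; it is purely illustrative:

```python
# Toy model of the SDN split: the controller installs flow entries,
# the switch forwards by table lookup, and a table miss goes back up
# to the controller (the "packet-in" behavior OpenFlow defines).

flow_table = []

def install_flow(match, actions):
    """Controller role: push a match/action entry down to the switch."""
    flow_table.append({"match": match, "actions": actions})

def switch_forward(packet):
    """Switch role: first matching entry wins; a miss punts to the controller."""
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return ["send_to_controller"]  # table miss -> packet-in

# Application role: a trivial policy that pins HTTP traffic to port 2.
install_flow({"tp_dst": 80}, ["output:2"])
```

The interesting part is that the forwarding behavior lives entirely in software above the switch: change the application, and the same hardware forwards differently.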
CS: “While the work that’s being done with the controllers is very important, I think the most interesting pieces to look at are the actual applications. These help us make sense of what’s possible. The first one that I think is interesting is one we are doing at Indiana University: an OpenFlow load balancer called FlowScale. We have deployed it in our campus network, in front of our IDS systems, and are taking all of our traffic through it (a 48-port 10Gig switch). It does all the routing, failover, etc. you would want a load balancer to do, but cheaper than an off-the-shelf solution.
The other key project I would look at is the work that CPqD is doing. They are basically a Brazilian Bell Labs, and they are working on RouteFlow, which runs a virtual topology with open source routing software and then replicates that virtual topology into the OpenFlow switches. This is how you can take a top-of-rack switch and convert it into a very capable router and integrate a lot of different capabilities needed for research, campus and enterprise deployments.”
PP: “I’ve been looking at this space with respect to security and think there are a few core strategies that researchers are exploring to see how best to develop security technology that can dynamically respond to either threats in the network or changes in the OpenFlow stack. The idea is to monitor threats and then have the security technologies interact with the security controllers to apply new, dynamic mediation policies.
There is FlowVisor, led by Ali Al-Shabibi out of Stanford and Rob Sherwood (who used to be at Stanford, but is now at Big Switch), which works to secure network operations by segmenting, or slicing, the network control into independent virtual machines. Each network slice (or domain) is governed by a self-contained application, architected to not interfere with the applications that govern other network slices. Most recently, they started considering whether the hypervisor layer could also be a compelling layer in which to integrate enterprise- or data center-wide policy enforcement.
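The slicing idea described above can be sketched in a few lines. This is a minimal illustration, not FlowVisor's actual API: the slice definitions, field names, and function names here are all assumptions made up for the example. The core invariant is simply that a tenant controller's rule is admitted only if it falls entirely inside that tenant's slice of the flowspace.

```python
# Toy sketch of FlowVisor-style slicing. A "slice" here is just a set of
# header fields pinned to fixed values; each tenant controller may only
# install rules that stay inside its own slice. Names are illustrative.

SLICES = {
    "research": {"vlan": 10},      # the research slice owns VLAN 10
    "production": {"vlan": 20},    # the production slice owns VLAN 20
}

def inside_slice(rule_match: dict, slice_space: dict) -> bool:
    """A rule stays inside its slice only if it pins every sliced field
    to the slice's value; leaving a sliced field wild could leak out."""
    return all(rule_match.get(field) == value
               for field, value in slice_space.items())

def admit(slice_name: str, rule_match: dict) -> bool:
    """The network hypervisor admits a tenant's rule only within its slice."""
    return inside_slice(rule_match, SLICES[slice_name])

# A research rule scoped to its own VLAN is admitted; one that tries to
# touch the production VLAN is rejected before it reaches the switches.
print(admit("research", {"vlan": 10, "tp_dst": 80}))   # True
print(admit("research", {"vlan": 20}))                 # False
```

The point of the sketch is the isolation property: no matter what a slice's controller application does, the hypervisor layer confines its rules to its own flowspace, which is what lets the slices not interfere with one another.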
We [at SRI] have been working on FortNOX, which is an effort to extend the OpenFlow security controller to become a security mediation service – one that can apply strong policy in a network slice to ensure there is compliance with a fixed policy. It’s capable of instantiating a hierarchical trust model that includes network operations, security applications, and traditional OpenFlow applications. The controller reconciles all new flow rules against the existing set of rules and, if there’s a conflict, the controller, using digital signatures to authenticate the rule source, resolves it based on which author has highest authority.
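The hierarchical arbitration described here can be illustrated with a short sketch. To be clear, this is not FortNOX code; the three-tier authority model is taken from the description above, but the class names, the conflict test, and the table structure are assumptions, and signature verification of the rule author is assumed to have already happened.

```python
# Hypothetical sketch of FortNOX-style rule arbitration: a new flow rule
# is reconciled against the existing table, and conflicts are resolved in
# favor of the higher-authority author. Illustrative names throughout.
from dataclasses import dataclass

# Authority tiers, highest first, per the hierarchical trust model:
# network operators > security applications > ordinary OpenFlow apps.
AUTHORITY = {"operator": 3, "security_app": 2, "of_app": 1}

@dataclass
class FlowRule:
    match: frozenset   # e.g. frozenset({("nw_dst", "10.0.0.5"), ("tp_dst", 80)})
    action: str        # "allow" or "drop"
    author: str        # an AUTHORITY key (digital signature already verified)

def conflicts(a: FlowRule, b: FlowRule) -> bool:
    """Two rules conflict when one's match covers the other (so both
    could govern the same flows) but their actions disagree."""
    overlap = a.match <= b.match or b.match <= a.match
    return overlap and a.action != b.action

def try_insert(table: list, candidate: FlowRule) -> bool:
    """Admit the candidate only if no equal-or-higher-authority rule
    contradicts it; otherwise evict the lower-authority rules it overrides."""
    for existing in table:
        if conflicts(existing, candidate) and \
           AUTHORITY[existing.author] >= AUTHORITY[candidate.author]:
            return False                     # rejected: the existing rule wins
    table[:] = [r for r in table if not conflicts(r, candidate)]
    table.append(candidate)
    return True
```

For instance, once a security application has installed a drop rule for a host, a later allow rule from an ordinary OpenFlow application that overlaps it is rejected, because the security application sits higher in the trust hierarchy.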
CloudPolice, led by Ion Stoica from U.C. Berkeley in concert with folks from Princeton and Intel Labs Berkeley, is trying to use OpenFlow as a way to provide very customized security policy control for virtual OSs within the host. Here, the responsibility for network security is moved away from the network infrastructure and placed into the hypervisor of the host to mediate the flows with custom policies per VM stack.
The University of Maryland, along with Georgia Tech and the National University of Sciences and Technology (Pakistan), is working on employing OpenFlow as a delivery mechanism for security logic to more efficiently distribute security applications to last-hop network infrastructure. The premise is that an ISP or professional security group charged with managing network security could deploy OpenFlow applications into home routers, which is where most malware infections take place, to provide individual protection and better summary data up to the ISP layer (or other enforcement point), producing both higher-fidelity threat detection and highly targeted threat responses.”
Why are these projects important?
DT: “Because controllers are the pivot between switching and SDN applications, it’s a really important part of the system to develop right now. This is why I think Floodlight is so important. It’s been exciting to see the growing public contributions to the basic functionality and interfaces that were originally defined. I think a full web interface was recently added.
What’s important is changing, though, because of new projects and the rapidly growing ecosystem we are seeing. For instance, OFTest has started to get more attention again, partly because we’ve been adding lots of tests to it and partly because the broader ONF test group has been developing a formal test specification.
OpenFlow on hardware is still interesting to me because I think being able to control and manage the forwarding infrastructure via SDN will be important for the foreseeable future and maybe forever. This is why I continue to be active in Indigo.”
CS: “FlowScale is a proof point of the flexibility of OpenFlow and its potential to enable innovation. If you have an application that you want to deploy, you don’t have to wait for vendor implementations, and you don’t have to wait to get hardware that’s capable; you can take existing hardware and a little bit of software and implement it very quickly. For example, we have been working with other researchers who are interested in new multicast algorithms or PGP implementation; instead of having to wait for major vendors to decide it’s okay to put them in their hardware, we can implement them very inexpensively, try them at line rate, and then deploy them more widely.
It’s a little like the work that ONRC, the collaboration between Stanford and Berkeley, has been doing over the past years. They are building a lot of proof-of-concept applications with OpenFlow and continue to push new ideas out, taking new research and building implementations that can be used in future products. These applications are further out, but they give you ideas about what can be expanded on and made into new products. They have worked on a number of research projects – such as load balancing as a network primitive (which we incorporated into FlowScale) and their recent Header Space Analysis, which can verify the correctness of the network to ensure the policy of the network matches its actual physical deployment.
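The Header Space Analysis idea mentioned above can be caricatured in a few lines: treat packet headers as bit strings with wildcards, model each box as a transfer function over that space, and then check mechanically whether what can actually reach a segment agrees with the intended policy. This is a toy under stated assumptions, not the real HSA tooling; the function names, the 4-bit header, and the deny-pattern syntax are all invented for the illustration.

```python
# Toy sketch of the Header Space Analysis idea: headers are bit strings,
# boxes are transfer functions over sets of headers, and verification is
# comparing the reachable set against the intended policy. Illustrative only.

def matches(header: str, pattern: str) -> bool:
    """A wildcarded pattern ('x' = don't care) covers a concrete bit string."""
    return all(p in ("x", h) for h, p in zip(header, pattern))

def acl_transfer(headers, deny_patterns):
    """A filtering box passes only headers matched by no deny pattern."""
    return {h for h in headers
            if not any(matches(h, p) for p in deny_patterns)}

# Enumerate the full 4-bit header space entering the network edge.
all_headers = {format(i, "04b") for i in range(16)}

# Intended policy: the firewall must block everything whose first bit is 1.
firewall_out = acl_transfer(all_headers, deny_patterns=["1xxx"])

# Verification step: no denied header survives to the protected segment.
leaked = {h for h in firewall_out if h.startswith("1")}
print(sorted(firewall_out))   # only the 0xxx headers remain
print(leaked)                 # empty set: deployment matches policy
```

Real header spaces are vastly larger and are kept symbolic rather than enumerated, but the shape of the check is the same: compose the transfer functions along a path and ask whether the resulting reachable space violates the policy.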
RouteFlow is important because it proves you can remove the complexity from the hardware and get the same capabilities; it puts all the features and complexity in the PCs rather than the switches. We have been working with them on a demonstration at the Internet2 Joint Techs Conference, where we are going to show RouteFlow operating in hardware switches as a virtualized service deployed on the Internet2 network. This is the first time we have seen anything like this on a national backbone network.”
PP: “The security projects represent two branches of emphasis: one focused on using SDNs for more flexible integration of dynamic network security policies and the other on better diagnosis and mitigation. One branch is exploring how and where dynamic network security can be implemented in the OpenFlow network stack: the controller (control plane), the network hypervisor (FlowVisor), or even the OS hypervisor. The other branch is attempting to demonstrate security applications that are either written as OpenFlow applications for more efficient distribution or are tuned to interact with the OpenFlow controller to conduct dynamic threat mitigation.”
What are some of the hurdles?
DT: “The rapid change in the OpenFlow protocol specification has been a challenge we’ve all faced. It’s probably a symptom of the desire to drive change into these projects as quickly as possible. OvS, for instance, has not been updated since 1.0, though it has a number of its own extensions.
The second challenge faced by those working on open source, especially at the protocol level, is that there are often conflicting requirements between generating code which can be a reference to aid in understanding, versus code which can provide a basis for developing production quality software.
The Indigo project has suffered from two other things: first, the high expectation that it should provide a complete managed switch implementation, which normally requires a large company to implement and support; and second, the fact that a significant component is still only released as a binary. I think as the community goes forward, we are going to see additional work that’s going to make it a lot easier to use all these tools and products in many environments.”
CS: “Right now OpenFlow projects on hardware switches are still immature. It’s important to recognize it’s a different technology, with different limitations, and there are some things that are simply not possible right now. But if you don’t need that complete list of features, then it may make perfect sense to use some of these applications. Looking at the space, it’s easy to recognize that things are moving along quite rapidly, with new vendors, specifications, hardware support, etc. every day, so things will catch up and we will be able to implement many things that are not possible right now.”
PP: “The entire concept of SDN appears to be antithetical to our traditional notions of secure network operations. The fundamentals of security state that at any moment in time you know what’s being enforced. This requires a well-defined security policy instantiated specifically for the target network topology, that can be vetted, tested and audited for compliance.
Software defined networks, on the other hand, embrace the notion that you can continually redefine your security policy. They embrace the notion that policies can be recomputed or derived just in time, by dynamically inserting and removing rules, as network flows or the topology changes. The trick is in reconciling these two seemingly divergent notions.
In addition, OpenFlow applications may compete, contradict, override one another, incorporate vulnerabilities, or even be written by adversaries. The possibility of multiple, custom and 3rd-party OpenFlow applications running on a network controller device introduces a unique policy enforcement challenge – what happens when different applications insert different control policies dynamically? How does the controller guarantee they are not in conflict with each other? How does it vet and decide which policy to enforce? These are all questions that need to be answered in one way or another.
I think it’s best to have these conversations about how we envision securing OpenFlow and empowering new security applications now. Security has had a reputation of being the last to arrive at the party. I think this is a case where we could assist in making a big positive impact on a technology that could, in turn, provide a big positive impact back to security.”
What Does the Future Look Like for Open Source and SDNs?
DT: “I think we are going to see new architectures and reference implementations that will accelerate the deployment of SDNs in the very near future. People are often dismissive of ‘one-off’ projects, but the reality is that we face a host of problems, each of which requires a slightly different solution, while all of them can be addressed by SDN approaches. These projects are already coming out of the woodwork as more people better understand SDN. I’ve heard a few people start to say ‘the long tail is the killer app for SDN.’”
CS: “I believe there will be bottom-up adoption, where more and more applications are implemented until there is critical mass and it makes more sense, from a time and cost perspective, not to have to manage two different networks – traditional and SDN-based. When that happens, I think we will see a switch to SDNs.”
PP: “OpenFlow has some very exciting potential to drive new innovations in intelligent and dynamic network security defenses for future networks. Long term, I think OpenFlow could prove to be one of the more impactful technologies to drive a variety of new solutions in network security. I can envision a future in which a secure OpenFlow network:
My last blog focused on some general guidelines to protect our children online, here are some quick, concrete tips to keep them safe:
-- Make sure usernames/screen names/email addresses do not have any personally identifiable information
Stay away from initials, birthdates, hobbies, towns, graduation year, etc.
The smallest piece of identifiable information could lead a predator to you - remember they are highly motivated
--Don't link screen names to email addresses - if a child gets an email, they tend to think it is okay; it's not. Reiterate that if they don't actually know the person, that person is a stranger, regardless of how they make contact.
--Set up their buddy/friends list and regularly update and check them to ensure your kids are only interacting with people they actually know; this goes for their phone too.
--Don't post personal information - don't respond to requests from people OR companies
eMarketer found that 75% of children are willing to share personal information online about themselves and their family in exchange for goods and services
--Keep the computer in a public part of the house
--Consider limiting the amount of time they can spend on their phone, iPod, iPad, computer, etc. to whatever you deem as reasonable.
--Regularly check their online surfing history - know exactly where they are going and talk to them about it, so they know you know.
--Use filtering software to prevent access from things you know are bad. Note: only 1/3 of households are using blocking or filtering software.
--Protect your computing resources
Use parental controls - check out Norton's family plan as an example of tools you can consider installing
Here's a list on security technologies (protection from viruses, bots, Trojans and other malware) you might want to consider
Note be sure to use software from a reputable source, otherwise you may be unwittingly downloading malware that can do more harm than good
Make sure it offers a wide range of protection - different attacks use different methods to infiltrate your computer and you want full coverage
--Follow good rules of thumb
Don't open anything (emails or attachments) from anyone you don't know
Don't open anything that looks a little too good to be true - it probably is
Make sure your email doesn't automatically open emails - check your settings
Kids will be kids; they will be curious, test boundaries, and do things that show less than stellar judgment. As parents, we try to guide, support and love them to keep them safe and on a productive path. Inevitably, our efforts collide - you've all seen the tween/teen TV dramas - and the problem is that in this digital age the opportunities for unhappy outcomes have grown.
This just means we have to redouble our efforts; we need to connect with our kids and give them the tools they need to navigate and stay safe both in the physical world and online one. From day one, we teach our kids to look both ways before crossing the street, to never take anything or go anywhere with strangers, to walk away from a fight, to speak up when someone is not being nice, to say no to drugs, etc. We need to also teach our kids to do the same things when they go online.
Sarah Sorensen is the author of The Sustainable Network: The Accidental Answer for a Troubled Planet. The Sustainable Network demonstrates how we can tackle challenges, ranging from energy conservation to economic and social innovation, using the global network -- of which the public Internet is just one piece. This book demystifies the power of the network and issues a strong call to action.

We need to remove the idea that stuff online is "not real," or that it doesn't have consequences. We need to drill into them that they will be held accountable for what they do and say when they are online, just as they would be at home or at school. Explain to them that they need to think before they post and that they don't have a right to post whatever they want. For example, "sexting," or sending racy photos to a boy/girlfriend, is not harmless, even if the recipient is the same age; those messages can go everywhere and could be considered child pornography. Cyberbullying is a real problem, with real consequences - threatening someone online is just the same as threatening them on the playground.
Actually, the online world opens up new ways for predators or bullies to get at their victims. Unlike the bully on the playground, whom your child is able to get away from when they go home, the cyberbully is able to follow your child wherever they are. They can send menacing texts to your child's phone, make hurtful comments on their Facebook page, take and post photos of them with their digital cameras, and pop up and threaten them as they interact in digital worlds and games (such as Gaia, Second Life and World of Warcraft).
We need to ensure they protect themselves; that they are aware of their surroundings and understand that they shouldn't trust anyone that they don't physically know. As I mentioned in a past blog, "Protecting Our Children Online", there are three guiding principles that can help kids stay safe:
1. Don't share any personal information
2. Remember that everyone is a stranger
3. Know there is no such thing as private
But, let's face it, even the best kids (and adults) make mistakes. It's inevitable. They get curious or drop their guard, or do something without thinking through all the consequences.
By the way, there is new research that provides some insight into the question most of us parents have asked, "what were you thinking?" - it turns out that children's brains (until their mid-20s) may not be as adept at thinking through the consequences of their actions because their brains process information differently than adults'. (hmmm, what's my excuse?)
At these times, it's good to remember why kids go online in the first place. It may be they are looking to figure something out, want to fit in or belong, hope to be popular, or want to escape reality. The best thing we, as parents, can do is understand why our children are going online - are they researching for school, playing video games, chatting with their friends, exploring, etc.? We need to talk to them, get involved and know exactly what they are doing, so we can monitor their behavior and identify changes that might indicate something is wrong.
And sometimes, they find themselves in situations that they didn't intend to get into and are uncertain how to extract themselves from. At these times, we hope they turn to us, their parents, for help, so we can work through the problem together. However, they are often afraid to come to us because they:
1. Don't want to be restricted from using the computer - which may be their social lifeline
2. May not want to expose the offender (typically in cases of abuse, the victim has formed a relationship with the abuser, who has invested the time to gain their trust and be their "friend" - for a child, the average predator will talk to them for 4 to 6 months before approaching them for more)
3. Believe the threats of the offender that something bad will happen to them or their family if they tell
4. May fear punishment for their own bad behavior or participation in the activity
5. Are embarrassed that they fell for the scam or were used in this way
Understanding why they may not approach a parent is important, so you can try to address these fears head on. Again, there is no substitution for ongoing communication; but research shows that only 15% of parents are "in the know" about their kids' social networking habits, and how these behaviors can lead to cyberbullying. So, talk to your kids about the dangers and look for changes in their behavior. Have they suddenly lost all interest in going online? Do they shun their phone after getting a few texts? Are they irritable or demonstrating big mood swings?
Offer them a safe environment where they can participate in online activities. Make sure they know you are paying attention to what they are doing while online, and ensure they know they can confide in you and ask for your help the second something feels strange or uncomfortable. Apply the same good parenting skills and tactics that you would use in the physical world to your child's activities in the online world to help keep them safe. And just as generations past, we should strive to ensure they have the tools they need to go out on their own and navigate the world; it's just that the world is a lot more connected now, presenting our children with both greater risks and possibilities.
On April 6th, a federal appeals court ruled that the F.C.C. did not have the authority to regulate how Internet service providers manage their networks. At issue was Comcast's right to slow customers' access to the bandwidth-intensive file-sharing service BitTorrent. While it can now limit traffic that is overloading the network, Comcast was careful to say that it had changed its management policies and had no intention of doing so.
These comments were most likely meant to ease the minds of those who recognize the effect that this court ruling has on the F.C.C.'s authority to mandate "net neutrality." Advocates of net neutrality worry that this decision is going to give providers free rein to control what a user can and cannot access on the network.
It is this point that many of the media outlets focused on, turning this case into a potential watershed moment for watchdogs looking for unfair and biased treatment of traffic by Internet service providers. A single instance of seemingly preferential treatment of one type of content over another could end up causing a provider to lose the trust of their customers. It could also be reason enough for Congress to step in and explicitly grant the F.C.C. the authority to regulate.
As such, it is more important than ever for Internet service providers to be transparent in their actions to sustain customer loyalty. They need to make sure customers know how they plan to manage their networks and what to expect in order to build trust and a lasting relationship. Given that the national focus is on increasing Americans' access to high-speed Internet networks, anything seen to be contrary to achieving that goal, regardless of whether it is real or simply perceived, will have very negative consequences for that provider's brand.
This is probably why Comcast's statement around the verdict was subdued and focused on the future: "Comcast remains committed to the F.C.C.'s existing open Internet principles, and we will continue to work constructively with this F.C.C. as it determines how best to increase broadband adoption and preserve an open and vibrant Internet."
Providers who want to allay customer fear and skepticism around their motives should make an extra effort to reaffirm their commitment to providing high-speed access and high-quality services. They should start to have an authentic, ongoing dialogue (that is threaded through everything from their Web and social media communications to policies and procedures) that explains the challenges associated with supporting all the different demands of high-bandwidth applications and exactly what they are doing or are going to do to meet these challenges. Only if customers trust that they are providing an equal opportunity service will providers be able to sustain their business without a lot of regulation.