SORENSEN CONSULTING

What Do We Need to Do to Address the Cybersecurity Expertise Shortage?

1/26/2018

Originally published by Cloud Harmonics (www.cloudharmonics.com). Reposted with permission.

My last blog looked at the complex, dynamic cybersecurity landscape that makes it very difficult for someone to step into a cybersecurity role and succeed.  If we are to truly start to address the cybersecurity skills gap, we need to make it easier for someone to see, understand and shut down attacks – this requires a combination of technologies, services and experiential/educational components:

Technologies
More than half of respondents (55%) to a survey by Intel Security “believe cyber-security technologies will evolve to help close the skills gap within five years.” Likely this will come in the form of advances in more autonomous cybersecurity. The U.S. Department of Homeland Security painted a picture of what this might look like back in 2011, in the paper “Enabling Distributed Security in Cyberspace.” It described an ecosystem where “cyber participants, including cyber devices are able to work together in near-real time to anticipate and prevent cyberattacks, limit the spread of attacks across participating devices, minimize the consequences of attacks, and recover to a trusted state.” 

This is in contrast to the typical cybersecurity landscape today – in which an organization has a host of different cybersecurity technologies to try to protect all their different users, systems/devices and workflows, many of which they are blind to (e.g. cloud applications) or have no control over (e.g. personal devices). Each device requires a cyber analyst to not only deploy and manage it, but also interpret the information it produces and try to link it to other data to make sense of what is happening. Often analysts are siloed off, responsible for protecting one part of the network or managing one type of solution, making it hard for them to access everything they need to see the bigger, complete picture. Automation and orchestration can help bring all this information together and start to alleviate these problems. 

As autonomous cars and drones have grown in popularity, so have more autonomous security measures, which are better able to keep pace with the automation hackers employ to launch their attacks. We have seen vendors increasingly leverage artificial intelligence (AI), machine learning, orchestration and automation in an effort to accelerate an organization’s ability to identify and respond to changing cybersecurity needs. These measures can dramatically simplify the deployment and ongoing management of the security infrastructure, particularly for those elements that are manually intensive or lend themselves to ‘black and white’ decisions (e.g. when entities or events can be easily incriminated or exonerated). 

For example, a large organization can average close to 17,000 alerts a week, and only about one in five of those alerts turns out to be a genuine incident. Investigating each and every alert isn’t practical or an effective use of resources, but having a solution (e.g. incident response/analytics) that can automate investigations, so analysts can quickly understand what’s going on and prioritize their activities, is sustainable. Hence, we have seen an explosion in the IR automation market – the Enterprise Strategy Group found that 56% of enterprise organizations “are already taking action to automate and orchestrate incident response processes;” Technavio has the IR system market growing at a compound annual growth rate (CAGR) of 13%. 
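To make that prioritization step concrete, here is a minimal, hypothetical sketch of automated alert triage. The field names and weights are my own illustrative assumptions, not any vendor's product or API; the point is simply that a repeatable scoring pass lets analysts work the riskiest alerts first.

```python
# Hypothetical alert-triage sketch: score each alert on a few enrichment
# signals so analysts can work the queue in priority order. Fields and
# weights are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # which security device raised it
    severity: int            # 1 (low) .. 5 (critical), as reported
    asset_criticality: int   # 1 .. 5, from an asset inventory
    threat_intel_hits: int   # matching indicators in threat feeds
    related_alerts: int = 0  # other alerts touching the same host

def triage_score(a: Alert) -> float:
    """Blend the signals into one number; higher means investigate sooner."""
    return (2.0 * a.severity
            + 1.5 * a.asset_criticality
            + 3.0 * min(a.threat_intel_hits, 3)   # cap feed influence
            + 0.5 * a.related_alerts)

alerts = [
    Alert("ids", 2, 1, 0),
    Alert("endpoint", 4, 5, 2, related_alerts=3),
    Alert("proxy", 3, 2, 0, related_alerts=1),
]
for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{a.source:10s} score={triage_score(a):5.1f}")
```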

Other cybersecurity market segments and vendors are recognizing the need for automation/orchestration/machine learning/AI to address the skills gap. Palo Alto Networks’ latest release (8.0) of their platform included a number of capabilities that improve efficiency and coordination across the cybersecurity infrastructure (see our blog, xxxxx). Our colleagues at SecureDynamics have told us they’ve experienced an uptick in demand for their Rule Migration tool, which automates the translation of legacy firewall policies to next-generation application-based rule sets. There are also open source projects, such as MineMeld, that show how organizations can potentially use external threat feeds to support self-configuring security policies. 
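As a rough sketch of the feed-driven idea (this is not MineMeld's actual API, just the general pattern), a policy loop can be as simple as pulling a list of bad indicators and rendering it into block rules on a schedule. The feed URL and rule syntax below are hypothetical placeholders.

```python
# Illustrative sketch of feed-driven policy (not MineMeld's API): pull an
# external threat feed of bad IPs and render it into firewall block rules.
# The feed URL and the rule format are hypothetical placeholders.
import urllib.request

FEED_URL = "https://example.com/threat-feed/bad-ips.txt"  # hypothetical feed

def fetch_indicators(url: str) -> list[str]:
    with urllib.request.urlopen(url, timeout=10) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    # Keep non-empty, non-comment lines; a real pipeline would validate IPs.
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]

def render_block_rules(ips: list[str]) -> list[str]:
    return [f"deny ip from {ip} to any" for ip in sorted(set(ips))]

if __name__ == "__main__":
    for rule in render_block_rules(fetch_indicators(FEED_URL)):
        print(rule)
```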

To truly ease the burden on cybersecurity analysts and improve the efficiency and productivity of the cybersecurity infrastructure, we need more of these kinds of innovations and automations. 

Services
The reality is there are always times when organizations, even those with SOCs that are skilled and staffed appropriately, may need a little help. This is where services come in; we are finding there is greater acceptance that augmenting resources with a service offering can be a good way to enhance the effectiveness of an organization’s cybersecurity strategy and implementation. An outsider’s view can give organizations the knowledge they need, a fresh perspective or a new way of thinking that helps drive better decision-making and ultimately better security.  
The problem is that managed security services providers (MSSPs) are having to staff up themselves to meet the demand. Research and Markets predicted the MSSP sector will reach $31.9 billion by 2019, with a CAGR of 17.3% – and this may be low if you consider that a new report by MarketsandMarkets puts the incident response services market, one of the segments within the overall MSSP market, at $30.29 billion by 2021, with a CAGR of 18.3%. 

To address the demand and protect against the ever-expanding threat landscape, these MSSPs have to build (or acquire) the talent – which is why we’ve seen a lot of movement in this space (e.g. FireEye’s acquisition of Mandiant, IBM’s acquisition of Lighthouse Security Group LLC, and BAE Systems’ acquisition of SilverSky). Ultimately, to deliver the experience and know-how organizations need, these providers run into the same problem: the cybersecurity skills gap.

Educational Opportunities
Nothing replaces the knowledge and expertise of a security analyst when it comes to identifying, containing and fully remediating an incident. Unfortunately, as we’ve already mentioned, these folks are in short supply, so organizations need to develop this talent in-house themselves. In a SANS survey, 73% of organizations indicated “they intend to plan training and staff certifications in the next 12 months.” 

But what kind of training do they need to do and what kinds of skills do they need to build? Given the aforementioned breadth of threats, threat actors, systems/devices and workflows that could be involved in a cyber incident, it’s hard to create a concrete list of things to do or know. One attempt might focus on the layer they are trying to secure – e.g. network, endpoint, application, server, data, cloud, etc.; another might look at more general areas – e.g. intrusion detection, secure software development, risk mitigation, forensics, compliance, monitoring, identity management, etc. The reality is an organization needs to cover all these bases. 

This is probably why half the companies in the “Hacking the Skills Shortage” study said they would like to see a bachelor’s degree in a relevant technical area. This gives analysts a general background that can be built upon to develop the deeper, relevant knowledge needed to better protect an organization’s specific environment. 

The most effective skill building comes from real-world experience. I’m reminded of the Benjamin Franklin quote “Tell me and I forget, teach me and I may remember, involve me and I learn.” We have seen higher education institutions re-thinking the way they are structuring their learning to be much more hands on and interactive. Jelena Kovacevic, head of the electrical and computer engineering department at Carnegie Mellon University, explained to U.S. News, "At the center of meeting today's challenges is an age-old idea: Learn by making, doing and experimenting. We can do this by imbuing real-world problems into our curricula through projects, internships and collaboration with companies."
Not only seeing, but doing, hacks firsthand is one of the best ways for individuals to start to identify, understand, and ultimately stop them. As a result, 68% of respondents to that study said hacking competitions are a good way for individuals to develop critical cybersecurity skills. 

We, at Cloud Harmonics, have seen the difference that doing, versus hearing or watching, has on a person’s understanding. We developed our proprietary learning environment, Orchestra, to give attendees (we train more than 4,000 users every year) the opportunity to interact not only with the instructors who are leading the sessions, but also with the solutions themselves. Our virtual sandbox (vSandbox) and Ultimate Test Drive (UTD) days give attendees real-world experience with solutions, in a way that enables them to see firsthand how they could deploy, use and benefit from their capabilities in their own environment.

Because there is really no substitute for experiential learning, we expect to see more users signing up to test and work with solutions in a safe environment to speed their deployment and use of advanced features in their own organization. Ultimately, addressing the cybersecurity gap will take a confluence of technologies, services and experiential learning to build the skills and capabilities organizations need to keep up with (and ideally get ahead of) all the threats targeting them.

Why Things Have to Change Before We Can Make a Dent in the Cybersecurity Shortage

1/26/2018

Originally published by Cloud Harmonics - www.cloudharmonics.com, and reposted with permission.

Reflecting on the time I recently spent with some of our sales engineers, I was reminded that one of the biggest issues faced by most of the end-user organizations we work with (through our value-added reseller (VAR) partners) is a lack of cybersecurity expertise. Organizations simply can’t recruit or retain all the talent they need to mount an effective defense against all the different threats they are facing.

We’ve all seen the stats – 82% of IT professionals report a lack of cybersecurity skills within their organization; more than 30% of cybersecurity openings in the U.S. go unfilled every year; by 2019, there will be one to two million jobs unfilled in the global cybersecurity workforce.

So, why aren’t more people flocking to cybersecurity? Particularly when cybersecurity professionals are being heralded as one of the job market’s hottest commodities, in a cybersecurity market that experts predict will grow to $170 billion by 2020? I think, to state the obvious, it’s because cybersecurity is hard, and only getting harder. 

Cybersecurity experts have to stay on top of all the new threats facing their organization. That’s no small task, considering: 
  • A new zero-day vulnerability is discovered every week. 
  • There are an average of 200,000 new malware samples found on a daily basis.
  • More than 4,000 ransomware attacks are carried out every day.
  • More than 13,000 Android devices are infected every day by a targeted malware campaign.

Cybersecurity experts also have to stay on top of the ever-growing number of highly skilled hackers targeting their organization, all of whom have different, yet extremely persistent motivations, such as: 
  • Nation states – looking to exert influence, disrupt activities, and gain an advantage (e.g. Russian hackers were implicated in tampering with the 2016 U.S. election). 
  • Criminal rings – looking to steal or extort money (e.g. Avalanche group, Dyn DDoS attack). 
  • Internal actors – looking to further their personal agenda or exact revenge (e.g. 78% of breaches originate from within the extended enterprise), or being used as pawns in an attacker’s campaign (spear-phishing targeting employees increased 55% in 2015).

In addition, cybersecurity experts have to try to identify and shut down all the different vulnerabilities (and ways attackers can get “in”) throughout their organization. The universe of attack vectors is exploding, as organizations increasingly rely on:
  • Cloud computing – it’s predicted to grow 18% in 2017 to $246.8 billion in total worldwide revenue; three-quarters of websites have been found to have vulnerabilities. 
  • Personal mobile devices – by 2017, the total number of mobile phone users will rise to 4.77 billion.
  •  Internet of Things (IoT) – it’s estimated there will be 22.5 billion IoT devices by 2021; many of these devices can be exploited to launch attacks.   

Cybersecurity experts have to deploy, manage and maintain a range of different cybersecurity technologies to try to protect against all the threats and attackers targeting their organization. They need to monitor, identify and shut down the attack’s ability to exploit all the different attack vectors that potentially exist. 
As with everything in cybersecurity, determining what needs to be implemented to defend the ongoing operations of their business and the integrity and privacy of their critical assets is anything but simple. There were almost 600 vendors exhibiting at this year’s RSA and close to 250 startups doing things in and around the event. Almost all have marketing messages that make seemingly indistinguishable claims, offering overlapping capabilities that make the marketplace complex and confusing. 

It’s hard for even seasoned cybersecurity professionals to navigate, so how do we expect someone entering the field to get up to speed on everything? How do we expect them to be able to identify all the different vulnerabilities, threats and actors they could come up against? How do we expect them to learn how to use all these different systems and figure out what to do? 

The simple answer is we can’t expect them to do these things until we show them how to do them. If we are to address the cybersecurity shortage and recruit and retain vital cybersecurity personnel, we are going to have to change our expectations and adjust our approach. If we don’t, the cybersecurity skills gap is only going to get wider. For my thoughts on what these expectations should look like and what the approach should be to develop new talent to start to better address the skills shortage, check out part 2 of this blog series - "What Do We Need to Do to Address the Cybersecurity Expertise Shortage".

Top Open Source SDN Projects to Keep Your Eyes On

4/1/2013

Interest and momentum around OpenFlow and software defined networking (SDN) has certainly been accelerating. I think people are so excited about SDNs because, while we have seen a lot of innovation around the network – in the wireless space, the data center, and all the applications – there has been very little innovation in the network itself – the routers and switches – within the last decade. The prospect of completely re-architecting the network, by separating the control plane from the data plane, opens up a lot of new possibilities.

With SDNs, organizations aren’t constrained by how the network is built. They are free to build a dynamic, fluid infrastructure that can support fluctuating demands, shorter implementation cycles (check out Stanford’s Mininet), and completely new business models. But, as I have mentioned before, we are just at the beginning. While those of us watching this space have been impressed by the rapid pace of innovation within SDNs to date, it’s hard to predict what’s going to happen next. But that won’t stop us from trying!

I spent the last few weeks checking in with some SDN pioneers to find out what’s of interest in the SDN space these days. Among the experts I spoke with were Chris Small (CS), Network Researcher at Indiana University, Phil Porras (PP), Program Director at the Computer Science Lab of SRI, and Dan Talayco (DT), Member of the Technical Staff at Big Switch Networks. The following are some excerpts from my discussions:

What are the top projects in your mind going on right now around OpenFlow and SDNs?

DT: “It’s hard for me to choose just a couple to talk about.  Which is a great thing, isn’t it?  There are three very different parts of the ecosystem in SDN.  First, there are the switches providing the infrastructure that moves packets. Then there are controllers. This is a layer of centralized software controlling the forwarding behavior of the infrastructure (most often through the OpenFlow protocol) and providing a platform for the third layer, which is all the SDN Applications. These are software programs that run on controllers. They are given visibility into the topology of the network and are notified of events in the network to which they respond.

Here are four open source SDN projects I’d point to.  I’m more familiar with the lower two layers (switches and controllers), so mine are from there:
Floodlight is an open source controller in Java.  It was introduced less than a year ago I believe, but has been getting rapid acceptance in the OpenFlow community. Currently it has more public forum discussion traffic than all other controllers combined.

Open vSwitch (OvS) is a multi-layer virtual switch released under the open source Apache 2.0 license.  Its focus is primarily as a virtual switch, though it has been ported to various hardware platforms as well.  Some of the originators of OpenFlow created OvS.

OFTest was developed at Stanford.  It’s a framework and set of tests implemented in Python that give people a way to validate the functionality of their OpenFlow switches.  There was even a simple software switch written in Python to validate OpenFlow version 1.1 that is distributed with OFTest.

Indigo is a project, also started at Stanford, providing an implementation of OpenFlow on hardware switches.  It runs on several hardware platforms and has been used in a number of different environments.  This project is currently being updated to describe a generic architecture for OpenFlow switches targeting hardware forwarding.”
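To make those three layers concrete, here is a toy model of my own (an illustration only, not Floodlight's or Indigo's actual API): switches move packets, a controller dispatches network events, and an application responds to those events by installing flow rules.

```python
# Toy model of the three SDN layers (my own illustration, not any real
# controller's API): switches forward packets, a controller dispatches
# events, and applications respond by installing flow rules.
class Switch:
    def __init__(self, dpid):
        self.dpid = dpid
        self.flow_table = []                 # (match, action) pairs

    def install_flow(self, match, action):
        self.flow_table.append((match, action))

class Controller:
    def __init__(self):
        self.switches = {}
        self.apps = []                       # SDN applications live here

    def register(self, switch):
        self.switches[switch.dpid] = switch

    def packet_in(self, dpid, src_mac, in_port, dst_mac):
        # An unmatched packet arrived; notify every application of the event.
        for app in self.apps:
            app.on_packet_in(self.switches[dpid], src_mac, in_port, dst_mac)

class LearningApp:
    """Learns which port each MAC address lives on, then installs flows."""
    def __init__(self):
        self.mac_to_port = {}

    def on_packet_in(self, switch, src_mac, in_port, dst_mac):
        self.mac_to_port[src_mac] = in_port
        out_port = self.mac_to_port.get(dst_mac, "FLOOD")
        switch.install_flow(match={"dst": dst_mac}, action=f"output:{out_port}")

ctrl = Controller()
sw = Switch(dpid=1)
ctrl.register(sw)
ctrl.apps.append(LearningApp())
ctrl.packet_in(1, "aa:bb:cc:00:00:01", 3, "aa:bb:cc:00:00:02")  # floods, learns
ctrl.packet_in(1, "aa:bb:cc:00:00:02", 7, "aa:bb:cc:00:00:01")  # exact match
print(sw.flow_table)
```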

CS: “While the work that’s being done with the controllers is very important, I think the most interesting pieces to look at are the actual applications. These help us make sense of what’s possible. The first one that I think is interesting is one we are doing at Indiana University: an OpenFlow load-balancer called FlowScale. We have deployed it out in our campus network, in front of our IDS systems, and are taking all of our traffic through it (a 48-port 10-Gig switch). It does all the routing, failover, etc. you would want a load balancer to do, but cheaper than an off-the-shelf solution.

The other key project I would look at is the work that CPqD is doing. They are basically a Brazilian Bell Labs, and they are working on RouteFlow, which runs a virtual topology with open source routing software and then replicates that virtual topology into the OpenFlow switches. This is how you can take a top-of-rack switch and convert it into a very capable router, and integrate a lot of different capabilities needed for research, campus and enterprise deployments.”
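For a rough sense of what FlowScale-style load balancing involves, here is a minimal sketch of my own (a simplification, not the project's code): hash each flow's 5-tuple to one of several IDS-facing ports, so every packet of a given flow consistently takes the same path.

```python
# Minimal sketch of hash-based flow load balancing (my simplification, not
# FlowScale's code): map each flow's 5-tuple to one of N output ports.
import hashlib

IDS_PORTS = [1, 2, 3, 4]   # hypothetical switch ports feeding IDS sensors

def pick_port(src_ip, dst_ip, proto, src_port, dst_port, ports=IDS_PORTS):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return ports[int.from_bytes(digest[:4], "big") % len(ports)]

# Every packet of a given flow hashes to the same IDS port:
assert pick_port("10.0.0.1", "10.0.0.9", "tcp", 44123, 80) == \
       pick_port("10.0.0.1", "10.0.0.9", "tcp", 44123, 80)
```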

PP: “I’ve been looking at this space with respect to security and think there are a few core strategies that researchers are exploring to see how best to develop security technology that can dynamically respond to either threats in the network or changes in the OpenFlow stack. The idea is to monitor threats and then have the security technologies interact with the security controllers to apply new, dynamic mediation policies.

There is FlowVisor, led by Ali Al-Shabibi out of Stanford and Rob Sherwood (who used to be at Stanford, but is now at Big Switch), which works to secure network operations by segmenting, or slicing, the network control into independent virtual machines. Each network slice (or domain) is governed by a self-contained application, architected to not interfere with the applications that govern other network slices. Most recently, they started considering whether the hypervisor layer could also be a compelling layer in which to integrate enterprise- or data center-wide policy enforcement.

We [at SRI] have been working on FortNOX, which is an effort to extend the OpenFlow security controller to become a security mediation service – one that can apply strong policy in a network slice to ensure there is compliance with a fixed policy. It’s capable of instantiating a hierarchical trust model that includes network operations, security applications, and traditional OpenFlow applications. The controller reconciles all new flow rules against the existing set of rules and, if there’s a conflict, the controller, using digital signatures to authenticate the rule source, resolves it based on which author has highest authority.

CloudPolice, a project led by Ion Stoica from U.C. Berkeley in concert with folks from Princeton and Intel Labs Berkeley, is trying to use OpenFlow as a way to provide very customized security policy control for virtual OSs within the host. Here, the responsibility for network security is moved away from the network infrastructure and placed into the hypervisor of the host to mediate the flows with custom policies per VM stack.

The University of Maryland, along with Georgia Tech and the National University of Sciences and Technology (Pakistan), is working on employing OpenFlow as a delivery mechanism for security logic, to more efficiently distribute security applications to last-hop network infrastructure. The premise is that an ISP or professional security group charged with managing network security could deploy OpenFlow applications into home routers, which is where most of the malware infections take place, to provide individual protection and send better summary data up to the ISP layer (or other enforcement point), producing both higher-fidelity threat detection and highly targeted threat responses.”
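To illustrate the kind of rule mediation Phil describes for FortNOX, here is a simplified sketch; the authority levels and the conflict test are my own illustrative choices, not SRI's implementation. The idea is that when a candidate flow rule conflicts with an installed one, the rule whose author carries higher authority wins.

```python
# Simplified sketch of authority-based rule mediation (illustrative only,
# not FortNOX's code): on conflict, the higher-authority author wins.
AUTHORITY = {"operator": 3, "security_app": 2, "openflow_app": 1}

def conflicts(a, b):
    # Toy conflict test: same match but opposite actions.
    return a["match"] == b["match"] and a["action"] != b["action"]

def mediate(installed, candidate):
    for i, rule in enumerate(installed):
        if conflicts(rule, candidate):
            if AUTHORITY[candidate["author"]] > AUTHORITY[rule["author"]]:
                installed[i] = candidate   # higher authority wins
            return installed               # lower/equal authority: rejected
    installed.append(candidate)
    return installed

rules = [{"match": "dst=10.0.0.5:445", "action": "drop", "author": "security_app"}]
rules = mediate(rules, {"match": "dst=10.0.0.5:445", "action": "forward",
                        "author": "openflow_app"})   # rejected: would undo a block
print(rules)
```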

Why are these projects important? 

DT: “Because controllers are the pivot between switching and SDN applications, it’s a really important part of the system to develop right now.  This is why I think Floodlight is so important.  It’s been exciting to see the growing public contributions to the basic functionality and interfaces that were originally defined.  I think a full web interface was recently added.

What’s important is changing, though, because of new projects and the rapidly growing ecosystem we are seeing. For instance, OFTest has started to get more attention again, partly because we’ve been adding lots of tests to it and partly because the broader ONF test group has been developing a formal test specification.

OpenFlow on hardware is still interesting to me because I think being able to control and manage the forwarding infrastructure via SDN will be important for the foreseeable future and maybe forever.  This is why I continue to be active in Indigo.”

CS: “FlowScale is a proof point of the flexibility of OpenFlow and its potential to enable innovation. If you have an application that you want to deploy, you don’t have to wait for vendor implementations or for hardware that’s capable; you can take existing hardware and a little bit of software and implement it very quickly. For example, we have been working with other researchers who are interested in new multicast algorithms or a PGP implementation; instead of having to wait for major vendors to decide it’s okay to put these in their hardware, we can implement them very inexpensively, try them at line rate, and then deploy them more widely.

It’s a little like the work that ONRC, the collaboration between Stanford and Berkeley, has been doing these past years. They are doing a lot of proof-of-concept applications with OpenFlow and continue to push new ideas out. They are taking new research and building implementations that can be used in the future for new products. These applications are further out, but they give you ideas about what can maybe be expanded on and made into new products. They have worked on a number of research projects – such as load balancing as a network primitive (which we incorporated into FlowScale) and their recent Header Space Analysis, which can verify the correctness of the network to ensure the policy of the network matches its actual physical deployment.

RouteFlow is important because it proves you can remove the complexity from the hardware and get the same capabilities; it puts all the features and complexity in the PCs rather than the switches. We have been working with them on a demonstration of it at the Internet2 Joint Techs Conference, where we are going to show RouteFlow operating in hardware switches as a virtualized service deployed on the Internet2 network. This is the first time we have seen anything like this on a national backbone network.”

PP: “The security projects represent two branches of emphasis: one focused on using SDNs for more flexible integration of dynamic network security policies and the other for better diagnosis and mitigation. One branch is exploring how and where dynamic network security can be implemented in the OpenFlow network stack: the controller (control plane), the network hypervisor (flowvisor), or even the OS hypervisor.   The other branch is attempting to demonstrate security applications that are either written as OpenFlow applications for more efficient distribution or are tuned to interact with the OpenFlow controller to conduct dynamic threat mitigation.”

What are some of the hurdles? 

DT: “The rapid change in the OpenFlow protocol specification has been a challenge we’ve all faced. It’s probably a symptom of the desire to drive change into these projects as quickly as possible. OvS, for instance, has not moved beyond OpenFlow 1.0, though it has a number of its own extensions.

The second challenge faced by those working on open source, especially at the protocol level, is that there are often conflicting requirements between code that serves as a reference to aid understanding and code that provides a basis for developing production-quality software.

The Indigo project has suffered from two other things: first, the high expectations that it should provide a complete managed-switch implementation, which normally takes a large company to implement and support; and second, the fact that a significant component is still only released as a binary. I think as the community goes forward, we are going to see additional work that’s going to make it a lot easier to use all these tools and products in many environments.”

CS: “Right now OpenFlow projects on hardware switches are still immature. It’s important to recognize it’s a different technology, with different limitations, and there are some things that are simply not possible right now. But if you don’t need that complete list of features, then it may make perfect sense to use some of these applications. Looking at the space, it’s easy to recognize that things are moving along quite rapidly, with new vendors, specifications, hardware support, etc. every day, so things will catch up and we will be able to implement many things that are not possible right now.”

PP: “The entire concept of SDN appears to be antithetical to our traditional notions of secure network operations. The fundamentals of security state that at any moment in time you know what’s being enforced. This requires a well-defined security policy instantiated specifically for the target network topology, that can be vetted, tested and audited for compliance.

Software defined networks, on the other hand, embrace the notion that you can continually redefine your security policy.  They embrace the notion that policies can be recomputed or derived just in time, by dynamically inserting and removing rules, as network flows or the topology changes. The trick is in reconciling these two seemingly divergent notions.

In addition, OpenFlow applications may compete, contradict, override one another, incorporate vulnerabilities, or even be written by adversaries. The possibility of multiple, custom and 3rd-party OpenFlow applications running on a network controller device introduces a unique policy enforcement challenge – what happens when different applications insert different control policies dynamically? How does the controller guarantee they are not in conflict with each other? How does it vet and decide which policy to enforce? These are all questions that need to be answered in one way or another.

I think it’s best to have these conversations about how we envision securing OpenFlow and empowering new security applications now. Security has had a reputation of being the last to arrive at the party. I think this is a case where we could assist in making a big positive impact on a technology that could, in turn, provide a big positive impact back to security.”

What Does the Future Look Like for Open Source and SDNs? 

DT: “I think we are going to see new architectures and reference implementations that will accelerate the deployment of SDNs in the very near future. People are often dismissive of ‘one-off’ projects, but the reality is that we face a host of problems, each of which requires a slightly different solution, while all of them can be addressed by SDN approaches. These projects are already coming out of the woodwork as more people better understand SDN. I’ve heard a few people start to say ‘the long tail is the killer app for SDN.’”

CS: “I believe there will be bottom-up adoption, where more and more applications are implemented until there is critical mass and it makes more sense, from a time and cost perspective, to not have to manage two different networks – traditional and SDN-based. When that happens I think we will see a switch to SDNs.”
PP: “OpenFlow has some very exciting potential to drive new innovations in intelligent and dynamic network security defenses for future networks. Long term, I think OpenFlow could prove to be one of the more impactful technologies driving a variety of new solutions in network security. I can envision a future in which a secure OpenFlow network:
  • incorporates logic at the control or infrastructure layer to mediate all incoming flow rules against an organization’s network security policy in a way that can’t be circumvented and is complete.
  • allows the full dynamism of OpenFlow applications to produce optimal flow routing decisions, while being free to remain unaware of the current security policy and not depended upon to preserve network security. Rather, operators will trust that security enforcement will occur at the control or infrastructure layer.
  • enables InfoSec practitioners to develop future powerful OpenFlow-enabled security applications that can dynamically reprogram flow routing to mitigate threats to the network, remove or quarantine assets that violate security or fail to exhibit runtime integrity, and react to network-wide failure modes.


When we can achieve all three of these, we’ll be able to provide some compelling reasons why OpenFlow has a distinct advantage over existing networking, while instilling the confidence we need to embrace all the other benefits of SDNs. I believe we can reconcile static and dynamic policy enforcement and create all new mitigation services that are much more intelligent and effective countermeasures to better defend our networks.”

Protecting Children Online - Part II: Quick Tips

4/26/2010

My last blog focused on some general guidelines to protect our children online; here are some quick, concrete tips to keep them safe:
-- Make sure usernames/screen names/email addresses do not contain any personally identifiable information

Stay away from initials, birthdates, hobbies, towns, graduation year, etc.

The smallest piece of identifiable information could lead a predator to you - remember, they are highly motivated 

-- Don't link screen names to email addresses. If a child gets an email, they tend to think it is okay; it's not. Reiterate that if they don't actually know the person, that person is a stranger, regardless of how they make contact. 

-- Set up their buddy/friends lists and regularly update and check them to ensure your kids are only interacting with people they actually know; this goes for their phones too. 

-- Don't post personal information - don't respond to requests from people OR companies

eMarketer found that 75% of children are willing to share personal information online about themselves and their family in exchange for goods and services

-- Keep the computer in a public part of the house
 
-- Consider limiting the amount of time they can spend on their phone, iPod, iPad, computer, etc. to whatever you deem reasonable. 

-- Regularly check their online surfing history - know exactly where they are going and talk to them about it, so they know you know. 

-- Use filtering software to prevent access to things you know are bad. Note: only 1/3 of households are using blocking or filtering software.

-- Protect your computing resources 

Use parental controls - check out Norton's family plan as an example of the tools you can consider installing

Here's a list of security technologies (protection from viruses, bots, Trojans and other malware) you might want to consider 

Note: be sure to use software from a reputable source; otherwise you may unwittingly download malware that can do more harm than good

Make sure it offers a wide range of protection - different attacks use different methods to infiltrate your computer and you want full coverage

-- Follow good rules of thumb

Don't open anything (emails or attachments) from anyone you don't know

Don't open anything that looks a little too good to be true - it probably is

Make sure your email client doesn't automatically open emails - check your settings

Protecting Children Online - Part One

4/25/2010

Kids will be kids; they will be curious, test boundaries, and do things that show less than stellar judgment. As parents, we try to guide, support and love them to keep them safe and on a productive path. Inevitably, our efforts collide - you've all seen the tween/teen TV dramas - and the problem is that in this digital age the opportunities for unhappy outcomes have grown. 

This just means we have to redouble our efforts; we need to connect with our kids and give them the tools they need to navigate and stay safe in both the physical world and the online one. From day one, we teach our kids to look both ways before crossing the street, to never take anything or go anywhere with strangers, to walk away from a fight, to speak up when someone is not being nice, to say no to drugs, etc. We need to teach our kids to do the same things when they go online.
We need to remove the idea that stuff online is "not real," or that it doesn't have consequences. We need to drill into them that they will be held accountable for what they do and say when they are online, just as they would be at home or at school. Explain to them that they need to think before they post and that they don't have a right to post whatever they want. For example, "sexting," or sending racy photos to your boy/girlfriend, is not harmless, even if they are the same age as you; those messages can go everywhere and could be considered child pornography. Cyberbullying is a real problem, with real consequences - threatening someone online is just the same as threatening them on the playground.

In fact, the online world opens up new ways for predators or bullies to get at their victims. Unlike the bully on the playground, whom your child can get away from by going home, the cyberbully can follow your child wherever they are. They can send menacing texts to your child's phone, make hurtful comments on their Facebook page, take and post photos of them with their digital cameras, and pop up and threaten them as they interact in digital worlds and games (such as Gaia, Second Life and World of Warcraft). 

We need to ensure they protect themselves; that they are aware of their surroundings and understand that they shouldn't trust anyone that they don't physically know. As I mentioned in a past blog, "Protecting Our Children Online", there are three guiding principles that can help kids stay safe: 
1. Don't share any personal information
2. Remember that everyone is a stranger
3. Know there is no such thing as private

But, let's face it, even the best kids (and adults) make mistakes. It's inevitable. They get curious or drop their guard, or do something without thinking through all the consequences.   

By the way, there is new research that provides some insight into the question most of us parents have asked: "what were you thinking?" It turns out that children's brains (until their mid-20s) may not be as adept at thinking through the consequences of their actions, because their brains process information differently than adults' do. (Hmmm, what's my excuse?) 

At these times, it's good to remember why kids go online in the first place. It may be they are looking to figure something out, want to fit in or belong, hope to be popular, or want to escape reality.  The best thing we, as parents, can do is understand why our children are going online - are they researching for school, playing video games, chatting with their friends, exploring, etc.?  We need to talk to them, get involved and know exactly what they are doing, so we can monitor their behavior and identify changes that might indicate something is wrong. 

And sometimes, they find themselves in situations that they didn't intend to get into and are uncertain how to extract themselves from.  At these times, we hope they turn to us, their parents, for help, so we can work through the problem together. However, they are often afraid to come to us because they: 
1. Don't want to be restricted from using the computer - which may be their social lifeline 
2. May not want to expose the offender (typically in cases of abuse, the victim has formed a relationship with the abuser, who has invested the time to gain their trust and be their "friend" - for a child, the average predator will talk to them for 4 to 6 months before approaching them for more)
3. Believe the threats of the offender that something bad will happen to them or their family if they tell
4. May fear punishment for their own bad behavior or participation in the activity
5. Are embarrassed that they fell for the scam or were used in this way 

Understanding why they may not approach a parent is important, so you can try to address these fears head on.  Again, there is no substitution for ongoing communication; but research shows that only 15% of parents are "in the know" about their kids' social networking habits, and how these behaviors can lead to cyberbullying. So, talk to your kids about the dangers and look for changes in their behavior. Have they suddenly lost all interest in going online? Do they shun their phone after getting a few texts? Are they irritable or demonstrating big mood swings?  
Offer them a safe environment in which to participate in online activities. Make sure they know you are paying attention to what they are doing while online, and ensure they know they can confide in you and ask for your help the second something feels strange or uncomfortable. Apply the same good parenting skills and tactics that you would use in the physical world to your child's activities in the online world to help keep them safe. And just as generations past have done, we should strive to ensure they have the tools they need to go out on their own and navigate the world; it's just that the world is a lot more connected now, presenting our children with both greater risks and possibilities.

Opinion - How the Role of the F.C.C. Impacts Internet Providers

4/24/2010

On April 6th, a federal appeals court ruled that the F.C.C. did not have the authority to regulate how Internet service providers manage their networks. At issue was Comcast's right to slow customers' access to the bandwidth-intensive, file-sharing service BitTorrent. While it can now limit traffic that is overloading the network, Comcast was careful to say that it had changed its management policies and had no intention of doing so. 

These comments were most likely meant to ease the minds of those who recognize the effect that this court ruling has on the F.C.C.'s authority to mandate "net neutrality." Advocates of net neutrality worry that this decision is going to give providers free rein to control what a user can and cannot access on the network.  

It is this point that many of the media outlets focused on, turning this case into a potential watershed moment for watchdogs looking for unfair and biased treatment of traffic by Internet service providers.  A single instance of seemingly preferential treatment of one type of content over another could end up causing a provider to lose the trust of their customers. It could also be reason enough for Congress to step in and explicitly grant the F.C.C. the authority to regulate. 

As such, it is more important than ever for Internet service providers to be transparent in their actions to sustain customer loyalty. They need to make sure customers know how they plan to manage their networks and what to expect, in order to build trust and a lasting relationship. Given that the national focus is on increasing Americans' access to high-speed Internet networks, anything seen to be contrary to achieving that goal, regardless of whether it is real or simply perceived, will reflect very negatively on that provider's brand. 

This is probably why Comcast's statement around the verdict was subdued and focused on the future: "Comcast remains committed to the F.C.C.'s existing open Internet principles, and we will continue to work constructively with this F.C.C. as it determines how best to increase broadband adoption and preserve an open and vibrant Internet."
Providers who want to allay customer fear and skepticism around their motives should make an extra effort to reaffirm their commitment to providing high-speed access and high-quality services. They should start an authentic, ongoing dialogue (threaded through everything from their Web and social media communications to their policies and procedures) that explains the challenges associated with supporting all the different demands of high-bandwidth applications and exactly what they are doing, or are going to do, to meet these challenges. Only if customers trust that they are providing an equal-opportunity service will providers be able to sustain their business without a lot of regulation.

Hard Drives Can Pose Risks to Sustainability

4/23/2010

Extending the use of computing devices is critical if we are to create more sustainable consumption. We can divert waste from landfill and reduce the energy it takes to extract materials and build new devices if we can lengthen the life of the devices we already have or find new ways to use their components.

I think most of us try to recycle our devices and are happy to pass along those we have outgrown. But what if a device's reuse poses a risk to you? Hard drives can pose such a risk and, as such, often have their lives and usefulness cut short. 

What do you do with your hard drive, which often houses all of your intellectual property and sensitive information, when you are done with it? How do you make sure your information isn't found and used by someone else? Just deleting the information doesn't mean it's gone; it is not too difficult to get the data back. (Something I am often thankful for when I delete a file by accident, but which opens up a huge risk when you really want to get rid of the information.) Even when your hard disk is corrupted or physically damaged, all is not lost (just do a quick search on hard disk recovery and you will find a whole host of sites and solutions that will help you recover the information).

But wouldn't it be more sustainable if we could extend the life of that device? What if there was a reliable way to permanently erase the data on it without having to shred the device?  Just because the model is no longer of use to you, it is very likely it would suit the needs of someone else. We could divert that device from landfill for a little while longer. Then, because we have a way to erase the data, we could explore recycling and reusing the components to further reduce waste. 

This is something that has been done with cell phones and copiers; they often receive an extended life in the hands of those who find an older model perfectly suitable. (I know I have donated my cell phone in the past; it's easy to search to find organizations in your area who have needs.) But is this safe to do now?

In the past, phones were only used for voice calls - the data potentially exposed consisted of your phone book. Remove your SIM card and you could be fairly sure that future users would not find anything personal left on your phone. Today's smart phones have the computing power of many desktops; they are being used to conduct our business and personal lives. Ever search the Web? Take a photo? Check your bank account? Pay a bill? Read your email? Download a file? Think of all the data that is potentially stored on the drive that now sits in that phone... how do you make sure it is gone when you are done with the phone? Does this mean we are back to destroying the device? Again, it would be great to know that we can reliably erase the data, so the device can be used by someone else. 

The same goes for photocopiers; over the past five to seven years, most copiers have been networked to a variety of computing devices, and each has a hard drive that records all the information that is copied, printed, faxed or scanned. Since most organizations don't want to spend the capital to buy a copier, they lease it from a provider (which also enables them to offload the repairs and maintenance). When the lease is up, the copier provider will come, delete the data, and send the copier off to another customer. But we have already noted that simply deleting data doesn't mean it is gone, so these copiers can provide a wealth of information to those who know to look for it. Again, this doesn't make it a sustainable solution. 

So what can you do? As an organization, you 
  • Need to first put in place a proactive data leak prevention program; only after you are sure you can identify all the potential risks can you put the processes and technologies in place to mitigate them. 
  • Consider using an enterprise-class disk management program that adheres to one of the eradication standards used by many international governments and militaries (such as DoD 5220.22, the Gutmann method, the Schneier Standard, AFSSI 50220, NAVSO P5239-26, VSItR, AR 380-19, GOST P50739-95, or crypto-secure random data). 
  • Ensure you can securely delete data from hard drives, including "locked" or "in-use files." 
    • This requires overcoming some operating system limitations that exist to ensure continual operation - which is what you want when you are using the system, but not so great when you want to get rid of the data. 
    • So, make sure you are able to delete all the different file systems from all the different operating systems you have on the device. 
  • You also want to make sure that you can eliminate "zombie-data" stored in the recycle bin or in the blank space of the hard drive. 

For individuals: 
  • You can download software that enables you to erase hard drives, such as LSoft Technologies' Active@ KillDisk. These tools write over the data, because deleting files and reformatting the drive doesn't actually remove the data (a simple illustration of the overwrite idea appears after this list). 
    • Note, data that has been written over only one or two times can still be recovered; however, it takes expensive equipment to do so. So unless you are expecting a super sleuth or crime lab to want to read your data, you are probably safe. 
    • If in fact you are worried about professionals taking the time to get at your data (you probably have bigger problems than I can imagine!), experts recommend rewriting the data seven times to make sure it is unrecoverable. 
    • Make sure you pay attention to those files that are "locked" or "in-use" and to "zombie data" - you don't want to leave them on the drive. 
  • Something to think about is the ability to remotely initiate and manage an erasure, so that if your phone or computer is lost, you can delete the data as soon as it connects to the network. 
    • Some operating systems have a "kill pill" feature that allows you to remotely erase and lock the device - make sure it's enabled. 
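Here is an illustrative, file-level sketch of the multi-pass overwrite idea mentioned above. It is not a certified eradication tool, and on SSDs and other wear-leveled media file-level overwrites are unreliable, so use purpose-built utilities or full-disk encryption for anything truly sensitive.

```python
# Illustrative only: overwrite a file's contents with random data several
# times before deleting it. NOT reliable on SSDs/wear-leveled media; use
# purpose-built eradication tools for sensitive data.
import os

def overwrite_and_delete(path: str, passes: int = 7) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # one pass of random data
            f.flush()
            os.fsync(f.fileno())        # push this pass out to disk
    os.remove(path)
```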

Once the hard drive no longer poses a risk, it can be reused. The goal is to promote a more sustainable way to use technology, so we can reduce our impact and drive change on a global scale.

Online Dangers - Three Principles Every Parent Should Instill

4/22/2010

I believe strongly in the potential of the network - heck, I wrote a book about it - however, I also understand the same connections that can be used for good can also be used for bad. And the reality is they can be downright dangerous for our children, who can be bullied, stalked and targeted online. 

How prevalent is it? The statistics are alarming. One in five teenagers in the US has received an unwanted sexual solicitation online, according to the Crimes Against Children Research Center. Child pornography is one of the fastest-growing businesses online. The National Crime Prevention Council suggests that more than half of American teens are exposed to some sort of cyberbullying, and the Kids Helpline found as many as 70% were harassed online. 

Unfortunately, these statistics became more personal for me when I learned of a recent incident in our local middle school. And if you are thinking, "Well that's there, it's not happening in our school district," you may want to check with your city's police or even just search your local news; you will find these crimes can and are taking place everywhere. So what can you do? 

As a parent, it's natural to want to remove the threats and simply shut down your children's access to the Internet. But are you really prepared to not only cut off access to their computer, but also their cell phone, digital camera, iTouch, video game consoles (Wii or PlayStation), etc.? Let's face it, we live in a digital age and the network is embedded in almost everything we do; so rather than ban it, we need to teach our children how to use it safely and effectively. 

I think the following three principles are a good start. Every parent should make sure their kids: 
  1. Do not share any personal information - Most obvious is name, age, school, hometown, etc.; less obvious, but no less telling for someone who is paying attention and motivated to figure it out are photos with a school jersey, the name of your local park, the location of your vet, the theater you are going to be at on Friday night, etc.  Don't reveal anything that could enable someone you don't know to figure out who you are and find you. 
  2. Remember that everyone is a stranger - Unless you actually know them, meaning they are a family member, a neighbor, someone you go to school with or know from clubs and extracurricular activities, they are not your "friends," they are strangers. You should not talk to them, take any gifts they may offer, or agree to do anything for them. Unlike the stranger in the mall, where you can at least see them; when you meet someone online you have NO IDEA who they really are. Don't engage.
  3. Know there is no such thing as private -  When you are online, the information you put out there can be found and accessed by almost everyone. This goes for texts, photos, videos, etc. Think before you post anything - is it something you want to see on the front page of a newspaper? If not, don't do it. 

And of course, the most important thing that our children need to know is that they can come to us, no matter what, and we will help them. As in the physical world, there is no substitute for being involved in their lives and that goes for their online activities. Make sure they know you are there and that should anything uncomfortable or threatening arise, you will support them.

F.C.C. Plans Have Potential to Accelerate the Roll Out of the Sustainable Network

4/21/2010

Tomorrow, the F.C.C. is putting forth to Congress a 10-year plan focused on developing high-speed Internet access as the dominant communications network. Up for debate are a recommendation for a subsidy for Internet providers to wire rural parts of the country, an auction of broadcast spectrum for wireless use (the goal is to free up roughly 500 megahertz of spectrum, much of which would come from TV broadcasters, for future mobile broadband uses), and the development of a new universal set-top box that connects to the Internet and cable service. 

The proposal includes reforms to the Universal Service Fund to focus on broadband access and affordability. It also calls for a "digital literacy corps" to help unwired Americans learn online skills, and recommends $12 billion to $16 billion for a nationwide public safety network that would connect police, fire departments and other first responders. 

It strives to put a stake in the ground for standard broadband speeds, with the promise that the F.C.C. will begin assessing the speeds and costs of consumer broadband service. In conjunction, consumers will be encouraged to test the speed of their home Internet access through a new suite of online and mobile phone applications that will be released by the F.C.C. to see if they are getting the promised speeds for which they are paying. 

This move by the F.C.C. comes on the heels of Google's announcement that it would offer ultrahigh-speed Internet access in a few communities to showcase what's possible with faster broadband networks. Google's move was seen as a prod in the direction now being taken by the F.C.C.: making sure that high-speed networks are truly available nationwide.
What this will do to the industry of network providers, who are currently trying to carve out their place and create business models that will enable them to justify the investments needed to make this high-speed network a reality, is yet to be determined. But it is clear that this move by the F.C.C. will have an effect on public policy for years to come, and it definitely puts pressure on the network offerings of existing providers. Stay tuned. It is going to be an interesting journey; one that has the potential to bring the best platform we have for sustainable progress, change and action to us all.

Reflections on RSA - Security is Really a Control and Data Management Problem

4/20/2010

This week, I spent some time at RSA, an event where security vendors and professionals connect. As I have mentioned in past blogs, security is paramount to the sustainability of the network. If we are to leverage the network as a powerful tool for change, we need to be able to trust that the information and resources on it are secure. 

As recent headlines have demonstrated, attacks on the network are ever-present; 2009 saw malware and social networking attacks surge (spam carrying malware was averaging 3 billion messages a day by the end of the year) and increasingly sophisticated mobile attacks emerge. Just as in the physical world, there are individuals motivated by greed, power and personal gain (the rise and co-opting of the Zeus attacks, which originally targeted financial institutions, is just one example - to date they have infected about 74,000 PCs), and there are those who are looking to achieve political or ideological ends.   

But, as the show floor and conference discussions demonstrated, there are a lot of technologies out there designed to help organizations combat and mitigate all these attacks. There are literally thousands of companies, focused on everything from user and data authentication to spyware and cloud security. So why is it that, even though there is an answer or feature out there for almost every threat or need, organizations are still struggling to protect the network? I think it's because security is more of a control and data management problem than a feature-set issue. 

I heard Palo Alto Networks talk about controlling exactly what should and should not be allowed on the network, based on the user and their role, the application, and exactly what they are trying to do. This approach makes sense because, with a focus on control, you can eliminate a lot of the risks right off the bat. You can restrict peer-to-peer traffic and file-sharing applications that can be used by attackers to gain access to the network (through malware/trojans) and all its resources. The key is to have this level of control over every aspect of your network, from the edge to the core and within the hosts themselves, and then, for what is allowed, look for threats and mitigate attacks within that "allowed" traffic.

This gets us to the data management problem; a typical network's security infrastructure contains multiple different devices, each with different management consoles, each producing a lot of logs that can contain thousands of pieces of information. Linking all this data and making sense of it requires a lot of manpower and expertise. Oh, and don't forget that physical security measures, which can also provide clues and contain indicators of risk, are kept almost entirely separate from the network security activities (typically they are run by two different groups with very little connection, though I did see a company at the show that was trying to bridge that gap). 
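To illustrate what "linking all this data" means in practice, here is a minimal sketch of my own (the field names are assumptions, not any product's schema): normalize events from different devices into one stream and surface any host that multiple devices flagged within the same time window.

```python
# Minimal sketch of cross-device log correlation (field names are
# illustrative assumptions): bucket events by host and time window, then
# surface hosts that more than one security device flagged.
from collections import defaultdict

def correlate(events, window_secs=300):
    """events: dicts with 'ts' (epoch secs), 'host', 'source', 'msg'."""
    buckets = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        buckets[(e["host"], e["ts"] // window_secs)].append(e)
    # Keep only windows where more than one device saw the same host.
    return {k: v for k, v in buckets.items()
            if len({e["source"] for e in v}) > 1}

events = [
    {"ts": 1000, "host": "10.1.2.3", "source": "firewall", "msg": "outbound deny"},
    {"ts": 1100, "host": "10.1.2.3", "source": "ids", "msg": "beacon signature"},
    {"ts": 5000, "host": "10.9.9.9", "source": "proxy", "msg": "allowed"},
]
for (host, _), hits in correlate(events).items():
    print(host, [e["source"] for e in hits])
```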

I think it is telling that it took Google and a host of other companies targeted by attackers originating in China MONTHS to figure out exactly what happened (in fact, I believe the investigation is still going on now). So, under the cover of the data deluge that network administrators face from all these different security devices, attackers can infiltrate a network and operate undetected.  
All of the calls to better manage business information and increase the value derived from insights and analysis of that information (take a look at last week's Economist special report) need to be applied to network security. Organizations need a singular, meaningful view into the network that helps them identify in real time what is going on and any threats to that network. To date, I haven't seen big advances on this front; sure, there are the large, generic platforms offered by the likes of HP and IBM, and security-specific management platforms from folks such as ArcSight. I would love to hear from you if you have seen promise in this area. Right now, I think we need more innovation; we need truly comprehensive visibility and the ability to easily and actively control and manage the network. The security and ultimate sustainability of the network as a platform for change relies on it.