This is all good and wonderful but, hold on, there are a couple of glaring issues. First of all, who controls the controller? The writer talks at length about logging the activity of the super user; but remember, the super user has access to all these logs. You got it: he can delete logs, he can suspend logging, he can even reach into the access control system and change how log management works. Therefore, if a "super user" wants to cause havoc, there’s really very little you can do to stop him.
Access control is of extreme importance to prevent mistakes and stop unauthorized access; but, beyond those two functions, it does barely anything to stop high-level crooks who are hell-bent on causing havoc.
When you reach that stage, if your organization permits it, you’ll need 'double keys' ~ access to certain things can only happen when two people are logged on. For example, with two different passwords, known by two different users, both of whom clearly understand that if one learns the other's password, his (or her) job is over ~ no mercy.
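To illustrate, here’s a minimal sketch of such a two-person control in Python. Everything here is hypothetical for the sake of the example ~ the usernames, passwords, and plain SHA-256 hashing; a production system would use a salted key-derivation function and a real credential store.

```python
import hashlib
import hmac

# Hypothetical credential store: username -> SHA-256 hash of password.
# A real system would use a salted KDF (bcrypt, scrypt, argon2), not bare SHA-256.
_CREDENTIALS = {
    "alice": hashlib.sha256(b"alice-secret").hexdigest(),
    "bob": hashlib.sha256(b"bob-secret").hexdigest(),
}

def _valid(user, password):
    """Check a single user's password against the store."""
    expected = _CREDENTIALS.get(user)
    if expected is None:
        return False
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, supplied)

def dual_key_authorize(user1, pw1, user2, pw2):
    """Grant access only when TWO DIFFERENT users both authenticate."""
    if user1 == user2:
        # One person supplying both keys defeats the whole control.
        return False
    return _valid(user1, pw1) and _valid(user2, pw2)
```

The point of the sketch is the `user1 == user2` check: the sensitive action simply cannot proceed on one person’s say-so.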
The other grave issue is that while this is great for large organizations, small organizations don’t (or can’t afford to) have this luxury. Very often, the same person wears multiple hats, i.e., there’s only one administrator-cum-super-user and he/she is the IT god. So, expecting to exercise any (let alone optimal) control over this person is, in my opinion, purely wishful thinking.
In almost everything else, though, I concur with the writer, and certainly, I agree with the idea of roles. True, it’s not new, but it’s the only one that makes sense. You’re allowed to access salary data not because your name is Joe, but because your job title is HR Director!
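The idea is simple enough to sketch: permissions attach to roles, and users attach to roles ~ never directly to resources. A minimal illustration in Python (the roles, users, and resource names below are all hypothetical):

```python
# Hypothetical role-to-permission mapping: access follows the job title, not the person.
ROLE_PERMISSIONS = {
    "HR Director": {"salary_data", "personnel_files"},
    "IT Admin": {"server_logs", "firewall_config"},
    "Accountant": {"invoices"},
}

# Users are mapped to roles, never directly to resources.
USER_ROLES = {
    "joe": "HR Director",
    "mary": "IT Admin",
}

def can_access(user, resource):
    """Access is granted if, and only if, the user's role carries the permission."""
    role = USER_ROLES.get(user)
    if role is None:
        return False
    return resource in ROLE_PERMISSIONS.get(role, set())
```

When Joe leaves and Sue becomes HR Director, you change one line in the user-to-role table ~ the permissions themselves never move.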
Still, the point remains: in smaller companies, how do you deal with higher-ups pulling rank and accessing more than they need and/or should be privy to? Then there’s the matter of log review: a good log management system costs upwards of $20,000 ~ a steep sum for a small company.
Typically, small companies are already stretched when they need to spend more than $2,000 on a firewall, and now, here we are, asking them to dish out $20,000 (or more) for a log management system?
As much as I’m convinced this is even more useful than a firewall, the sad fact remains that prices need to come down quite a bit before these devices become more ubiquitous. Until that happens, I really don’t see any small/medium company spending that amount of money unless some strong regulation compels them to do so.
During his recent State of the Union address, the President made reference to an executive order on Cyber Security, which he signed on 2/12/13. The Order relates to otherwise unspecified "critical infrastructure", and here’s an excerpt:
Sec. 2. Critical Infrastructure. As used in this order, the term critical infrastructure means systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.
So, are Credit Unions vital?
Reading the entirety of the order, one might take it to be more about the power grid, water systems, refineries and the like. In reality, though, during his speech, the President made clear reference to the cyber threat our banking system faces. As such, one must assume that, in his mind, the banking industry at large is part of our critical infrastructure, and that if cyber criminals or cyber enemies were to take down our banking system, it would have a drastic impact on our national economic security.
I have to say, I agree wholeheartedly.
While I’m not certain that if a cyber criminal were to take down one Credit Union, our entire nation would be at risk, I am quite positive that a concerted effort to endanger the entire banking system would be a disaster beyond measure.
Further in the document, we read:
Sec. 9. Identification of Critical Infrastructure at Greatest Risk. (a) Within 150 days of the date of this order, the Secretary [of DHS] shall use a risk-based approach to identify critical infrastructure where a cybersecurity incident could reasonably result in catastrophic regional or national effects on public health or safety, economic security, or national security ...
(c) The Secretary, in coordination with Sector-Specific Agencies, shall confidentially notify owners and operators of critical infrastructure identified under subsection (a) of this section that they have been so identified, and ensure identified owners and operators are provided the basis for the determination. The Secretary shall establish a process through which owners and operators of critical infrastructure may submit relevant information and request reconsideration of identifications under subsection (a) of this section
Now, don't expect your Board of Directors to receive a notification from DHS any time soon. I highly doubt they’ll be able to identify all 9,000+ CUs scattered across the US as critical. Personally, I believe that, in 150 days, all they’ll be able to accomplish within the financial sector is make reference to the largest banks (those too large to fail, to be clear), and most likely no one else. That doesn’t mean, though, that you cannot participate in this program.
There’s a small provision for voluntary submission, and, if you’re recognized as critical, you can still participate in the information sharing.
However, the issue I see with this order is the ambiguity about the sort of information to be shared. I’ve seen several recent articles in major newspapers, all with titles such as "the FBI warns Financial Institutions of imminent threats".
If this is the information they’re sharing, they can keep it.
We don't need the FBI to tell us our F.I.s are under constant attack; in fact, "imminent" doesn't even apply anymore - they’re already under a constant, persistent barrage of attacks, and they’re the most attacked because they hold what hackers want most - information that can make them a lot of money, very quickly.
My genuine hope is that the actual information they choose to ultimately share will be much more specific, such that it’ll allow you (and the rest of us) to take proactive actions in defending our networks way before the threat actually strikes.
Having said that, the order is, of course, intentionally generic, and I know all too well it couldn’t have been otherwise. But it does mention the possibility of including security providers in the information sharing process, which would be a boon for companies such as Network Box USA, particularly if the information reveals to us things we do not yet know.
I guess we'll just have to wait and see.
After reading the study posted on Network World (see link above), I felt compelled to put together my thoughts.
This study reports what was to be expected. We’re adopting a technology we’re far from familiar with, we’re overhauling the way we run our IT, and it isn’t always possible to foresee all the hurdles that may crop up and make appropriate contingency plans.
We are, after all, human.
The part which continues to amaze me is how so many companies find security in the cloud a challenge. There are many credible and reliable companies out there, Network Box USA included, who also offer their security solutions in virtual form ~ solutions that can be deployed in front of a virtual network.
What I’m trying to say is whether you choose a traditional UTM vendor and manage it yourself, or a Network Box managed solution, you’re essentially safeguarded by the same level of protection you would’ve received had the physical device been situated at the edge of your physical network. Truthfully, I just don't see the issue; it needs to be well planned, of course, and properly configured, but it really is no different than what you would’ve had to do in your network.
One thing I can surmise though is that many organizations think cloud = enormous savings. They then start deploying, and need a virtual machine for the firewall, one for each server, virtual switches, and so forth. Once they put everything together, the cost isn’t quite as small as they’d expected. The technology is new, it is promising and it is the future; the more companies adopt it, the more the prices will go down - and that’s the only thing we can hope for.
Click on either of the two links above to read up on the topic of targeted hack attacks on media outlets. In all candor, I find this, to a certain degree, rather amusing.
One of the articles says, "[It] all pointed to being hacked by the Chinese. They had the ability to get around to different servers and hide their tracks." Thing is, if they had the ability to hide their attack, how does the writer know it came from Chinese hackers?
One of the principal skills of hackers is hiding their tracks. You never really know where they’re coming from, and it’s incredibly rare for them to make a mistake and be caught. It’s common knowledge that the apparent originating IP of an attack is almost never the real one; it acts as a decoy, leaving a false trail. Yet both these articles talk about Chinese hackers, most likely tied to the government, most likely tied to the military. How do they know all this? THIS is what we would like to know.
And yet, in all fairness, I won’t be at all surprised if they were right. We all know the Chinese government has a total disregard for human rights and civil liberties, and that media in that country serves no purpose other than as a megaphone for those in power. Freedom of the press is a concept completely foreign to them, and the press freedom we enjoy must surely bother them, particularly when we publicize things about them that they’d rather keep quiet.
Nonetheless, to go from this (in)famous known fact to claiming with apparent certainty that it was indeed the Chinese military which hacked into the NYT and TWP, to spy on their news, is a completely different story. Again, how do we know it was them? Did they leave a taunting message? Something like "gotcha!"? Was some form of threat issued?
If all they did was ‘get in, spy and leave’, this is pure speculation, and could very well have been the actions of anyone. Heck, it could very well have been that the two papers spied on each other, and made it look as though it was the work of hacker attacks from China. Who’s to know the truth?
I know this sounds ridiculous, and I am intentionally exaggerating.
On that same note of flippancy, the article claims the NYT blamed Symantec for not catching the Trojan. Then, on the flip side, they’re also claiming this as a targeted attack. Seriously? Anyone can instantly deduce that the two things are in complete contradiction.
If it is a targeted attack, it means the hacker wrote the Trojan for the sole purpose of infiltrating the NYT network; therefore, it couldn’t have been a "common" virus available in the wild ~ Symantec couldn’t possibly have performed a miracle in surmising this was coming and dreaming up a signature.
If we were truly expecting a signature to stop the original Trojan, then, clearly, we’re admitting that this was a Trojan available on the internet, for which AV companies could have had a signature, and hence, it wasn’t targeted. The hacker just got lucky that, in this particular instance, his virus hit the NYT, and once he gained access, he curiously started snooping around. Such is human nature.
So - which is it? Targeted? Not Symantec's fault? Symantec's fault? Not targeted? I think someone really needs to make up their mind here.
Therefore, to be able to say with such preposterous certainty that this was done by the Chinese requires hard, legitimate proof. Proof that I would like to see. If it exists, this is an act of war and we should take countermeasures. If it does not, just please stop speculating already, keep quiet, fix the security of our networks, teach our users not to click on stupid links in unknown emails and to adopt safer behavior on their computers, and STOP_CRYING_FOUL. The point of the matter is that hackers don't leave tracks behind ~ least of all skilled hackers (unless, of course, they start getting cocky and make mistakes).
It would appear that Sandy is an example of the new normal. We had a similar situation in 2011, and, for all we know, this could well be how things are from here on. As weather patterns change, hurricanes get pushed further away from the Gulf and up along the East Coast. Inexorably, it’s time to start thinking of this in terms of “it may happen again” rather than “oh, it’s a once-in-a-century flood”.
With that in mind, let’s review the security of your network in the realm of business continuity. Obviously, if you’ve lost power, no one can hack you; but if you haven’t, and, thus far, all you have lost is your hardware, then it’s likely that you’re having to rebuild your security. Do you have a backup of your most recent firewall configuration? How many security devices did you lose? Will you be able to get your company back up and running, quickly and securely, to a pre-hurricane status quo? And, if you need to rebuild some servers from the ground up, isn’t the firewall the last thing you’d want to have to deal with anyway?
This is where managed security can add value to your efforts.
Among many other things, your provider would have backed up your configuration, properly and safely. They will be able to restore it onto a new device within minutes and have a replacement appliance shipped and delivered to your doorstep as quickly as UPS can manage it (well, at least, that’s how we take care of our Network Box customers).
They should act as part of your team and, while you dedicate resources to rebuilding servers or assisting your own end users, they ensure your network stays protected and you don’t incur the additional crisis of being hacked simply because some unscrupulous hacker took advantage of your moment of weakness.
There is truly a lot managed security can do for you and your organization, certainly a lot more than what has been detailed within this post, but I’ve endeavored to list the more important examples.
If you have any specific questions or concerns relating to the robustness or adequacy of your network security, particularly in times of natural disasters (or during any crisis, for that matter), please do not hesitate to leave a comment here.
Have a safe and secure start to November.
This is all very true and, to be candid, I’m not in the least bit surprised by these findings.
To be clear, the way it works today for the legitimate world is that someone (ethical hackers, researchers, Microsoft labs themselves) studies the code and finds vulnerabilities. They then report these issues publicly in the hope that this will force companies to fix them quickly. While some companies do, others don’t, for two main reasons.
The first is economic – it costs money to dedicate a task force to fixing bugs. The second lies in the fact that a vulnerability per se is not necessarily an issue ~ that is, until someone demonstrates that an exploit is possible.
In fact, what we should be discussing here is exploits rather than vulnerabilities. An error in the code doesn’t necessarily lay it bare and exposed to attacks ~ at times, exploits aren’t even possible ~ therefore, fixing that vulnerability is a moot point and a potential waste of money. This is why the vast majority of companies adopt a “wait and see” attitude, to ascertain if the exploit is actually possible and how hard it is.
My quibble with this article is not whether the issue is revealed to the public or not. I’m more focused on the fact that professional hackers are already doing all this research on their own. Yes, it’s true. They’re already fully aware of the vulnerabilities, are most likely exploiting them, and aren’t telling anyone because the longer their findings stay secret, the more money they can make. Hence, shouting out loud when we find something doesn’t give an extra edge to these hackers because, very frankly, it’s old news for them.
That said, it does give us ammunition to go back and demand a fix from the manufacturer, who might otherwise never even attempt to fix the issue, even when it’s been made known to the public.
I was able to share my views with SC Magazine in an article posted online yesterday evening. If you have a few minutes, I invite you to read it here:- http://www.scmagazine.com/zero-day-attacks-last-much-longer-than-most-would-believe/article/264104/
Until our next blog post, have a good one.
We last talked about the pros and cons of implementing a BYOD policy in your organization. This concluding part expands upon how companies can aid in mitigating those risks, either by making changes to their BYOD policy or to other policies within the organization.
Now, if I were to choose, I’d allow smart phones and tablets, while maintaining an exceedingly cautious stance where “bring your own laptop” is concerned. In any case, nothing should connect to the company network without protections of every possible kind: encryption, VPNs, proper AV. No matter what you do, never allow a connection without first ensuring that all these protections are in place and up-to-date.
Data at rest should be encrypted just as data in motion is. That way, if a device is lost, the data on it cannot be read ~ all the better for your security.
Hundreds of thousands of these devices are lost each year. I highly doubt you’d want the headache of changing all relevant certificates should your smartphone be lost, so identify how these certificates can be managed in such a way that losing a phone will not prove catastrophic.
I also cannot emphasize strongly enough how imperative it is for employees to sign a document from the outset, acknowledging your right as an employer to remotely wipe their devices should they be lost or stolen. Yes, even if this means deleting all their children’s photos ~ in the event this becomes necessary. Once a device is used for business, security must take priority over the preservation of personal data.
At this point, you’re very likely wondering about the types of infrastructure or software solutions a company could, or, I’d say, should invest in, to adequately support a BYOD policy.
This truly depends on the size of the company and how many such devices you will have. You need software to control what is installed on these devices – personal or not, you can’t allow random software that could compromise your security; you need AV; you need encryption and VPN. You also need software which ~ if possible ~ can track where the devices are, to ensure they are with their legitimate owners (should one of them end up thousands of miles from where it’s supposed to be, you immediately know you may very well have a problem on your hands).
You could ultimately end up needing an in-house infrastructure mimicking what Apple built for iTunes – something from which to download company apps. That said, this is a costly exercise, and while it makes sense for large corporations, it is impractical if you only have 20 devices. In that case, you could conduct a personal device check to ensure they’re configured according to security policies.
When connected into your network, these devices should be on a subnet of their own; wireless, protected, and with a special firewall ~ routing and scanning rules must be implemented to ensure they’re controlled, with any compromised device immediately spotted and isolated.
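As a rough sketch of what such segregation could look like with iptables (all subnets and addresses below are hypothetical placeholders; your topology will differ):

```shell
# Hypothetical addressing: BYOD wireless subnet 192.168.50.0/24,
# corporate LAN 10.0.0.0/16, proxy/scanner at 10.0.0.5, uplink on eth0.

# Let BYOD devices reach only the proxy/scanner on the corporate side...
iptables -A FORWARD -s 192.168.50.0/24 -d 10.0.0.5 -p tcp --dport 3128 -j ACCEPT

# ...and drop everything else from the BYOD subnet to the corporate LAN.
# (Rule order matters: the ACCEPT above must come first.)
iptables -A FORWARD -s 192.168.50.0/24 -d 10.0.0.0/16 -j DROP

# Internet-bound BYOD traffic goes out NATed, where it can be scanned in transit.
iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -o eth0 -j MASQUERADE
```

A compromised tablet on that subnet can then misbehave only toward the Internet and the scanner watching it, not toward your file servers.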
Network Access Control (NAC) is all the more important for these devices.
As to whether there are situations wherein a BYOD policy should simply not be instituted, or if there are specific companies which may not be a fit for BYOD, I would say this really depends on the level of confidentiality of your data, and how much control you want to retain over it and over the devices handling it.
The unfortunate issue here is that the people who, I firmly believe, should have it the least (C-level executives, for instance) are, most likely, the ones to end up having it, simply because they will demand it. These are the ones predisposed, without even realizing it, towards bypassing every security and IT policy. This, undoubtedly, puts the company at risk, particularly because of the level of confidentiality of the data they typically handle.
In closing, if I had to summarize everything into two, maybe three, tips related to BYOD which companies should follow in order to implement the best possible policy, I’d refer to what has been expanded upon above.
One, a big no to plain text and unprotected connections. Two, use VPNs and encrypt the disk. And three, ensure not only that you’re using an AV, but that it’s kept up to date.
So if you’re contemplating the pros and cons of BYOD, and whether a Bring Your Own Device policy is right for your corporate environment, this one’s for you.
Let’s begin with the pros.
First is cost reduction for the company – based on the presumption that the cost of the device, and maintenance thereof, is sustained by the employee. Next, the psychological aspect: since this is “my” device, it is always with me ~ but, oh wait, I have a work email; do I reply, or do I ignore it because it’s after hours? Most of us do reply, so we end up tethered to the office 24/7/365 – bad for us, good for the company.
Negative aspects – there are many, predominantly in the realm of security and confidentiality (for the company, I mean; for the employee, the negative aspects far outweigh the advantages, and that is one reason why I personally do not understand this trend).
So what should you do?
Evaluate the situation. Weigh the positives against the negatives.
As with every business decision, there are risks; there are possible legal, HR and personal considerations, and likely more. List also the presumed advantages; put them all on the table and measure where the scale tips. I won’t even attempt to develop that list; there is a risk analysis that needs to happen here, and it varies for each company.
Next point to consider would be some of the inherent risks of allowing employees to use their own smartphones, tablets, and other mobile devices in the company.
We need to distinguish the many aspects of this.
Bring your own laptop is not quite the same as bring your own smart phone. Smart phones and tablets are single-user devices with no privilege levels. You can’t log on to them, and they are not connected to any Active Directory. Hence, the potential damage which can arise therefrom, albeit real, will never be as great as what could happen with a laptop – escalation of privileges, mapping of server drives, logging in to remote platforms and so many other things can be done from a Trojan-infested workstation.
Smart phones and tablets, at the moment, present more of a threat for the user than for the company ~ most of the attacks are aimed at stealing personal information (online banking, for instance) or signing the user up for paid services which, in turn, rack up high phone charges unbeknownst to the actual owner/user. That said, a password stolen from a smart phone could, of course, be an issue for the business. For instance, an SSL certificate stolen from a tablet could allow a remote attacker in through a VPN. I personally have not seen all this yet, and I consider these threats more theoretical at this juncture.
But when it comes to laptops, there is no theory. A laptop can present a very dangerous mode of attack, and the company should have full control of it.
Next week, we will discuss how companies can help mitigate those risks ~ either by making changes to an existing BYOD policy or reviewing some key factors that should not be overlooked when developing a BYOD policy for the organization.
Enjoy your weekend.
I was recently asked this question ~ Which are the top apps for network admins to blacklist from enterprise networks?
Quite frankly, I think it’s wrong to look for the “top apps”. That said, organizations are going in that direction because they’re thinking in terms of application recognition, but in reality, it’s far more efficient to review this issue from a “type” of data standpoint.
First of all, our statistics on 5,000 installed Network Box devices (on a global scale) show that 90% of Internet traffic is HTTP/HTTPS. Spending too much time blocking anything else will gain you very little; yes, it might be useful in terms of security, but no, not really in terms of bandwidth.
Of course, a good proxy will recognize if the traffic flowing on ports 80 and 443 is indeed HTTP/HTTPS or not; and any other port should be closed or well controlled (specify the source and destination wherever possible).
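Such a check can be as simple as looking at the first bytes of the flow: a genuine HTTP request starts with a known method token, while tunneled traffic (a TLS handshake, SSH, BitTorrent) does not. A minimal illustration in Python ~ a real proxy does far more than this, of course:

```python
# Request-line method tokens defined by HTTP; a request must begin with one of these.
_HTTP_METHODS = (
    b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ",
    b"OPTIONS ", b"CONNECT ", b"TRACE ", b"PATCH ",
)

def looks_like_http(first_bytes):
    """Return True if the payload begins like an HTTP request line.

    A proxy can apply this to the first bytes read on port 80 to catch
    non-HTTP protocols being smuggled over a 'web' port.
    """
    return first_bytes.startswith(_HTTP_METHODS)
```

For example, an SSH session or a raw TLS handshake pushed through port 80 fails this check immediately, and the connection can be dropped or flagged.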
Our statistics further show that, of this web traffic, YouTube.com, Facebook.com and Twitter.com combined chew up no less than 80% of bandwidth, when allowed.
One aspect many fail to consider is the incidence of Microsoft updates; the larger your organization, the more fundamental it is to use an update server; you simply cannot allow 1000 computers to download 100MB of updates every month; it will kill your bandwidth! An update server allows you to download the updates only once, and then distribute them internally as appropriate. Microsoft updates from the Internet, without a local update server, usually account for another big chunk of Internet usage.
Streaming is bandwidth intensive as well, and should be blocked or, at the very least, well controlled. If you block streaming and your web filtering database is half decent, you’d already have blocked Netflix, Hulu, Blockbuster and the like.
Do you allow Facebook per se but block the apps and games? Do you want to allow Skype?
Once all this is done, you can begin worrying about the “Apps”.
Do you block YouTube but still allow a few selected channels? (Do note, though, that if you do this, you’ll still need to allow ytimg.com, which is where YouTube hosts its images.)
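As an illustration, on a Squid-style proxy this could be expressed with ACLs along these lines (the channel URL below is a hypothetical placeholder; your filtering product’s syntax may differ):

```
# Hypothetical Squid-style ACLs: block YouTube generally, but allow the
# image host it depends on and, optionally, selected channel URLs.
acl ytimg dstdomain .ytimg.com
acl youtube dstdomain .youtube.com
acl allowed_channels url_regex ^https?://www\.youtube\.com/user/CompanyChannel

http_access allow ytimg
http_access allow allowed_channels
http_access deny youtube
```

Order matters here too: the narrower allow rules must precede the broad deny, or the whitelisted channels will never be reached.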
The question here is, do you really want to “make a list”?
I personally believe that if it’s not business related, it should be blocked. However, if your company policy is such that you cannot block them, then perhaps a review of the policy as well as a lengthy chat with HR are called for. Recreational use of the Internet at work is irrefutably costly (for the company) but only a good HR policy can determine how much of it to allow and when.
So, which apps do YOU think should be blocked from the workplace?
One of the major drivers, in my opinion, is the adoption of the cloud. But the problem is, how do I manage user identification both in my own network and in my cloud without having to duplicate efforts? How can I be assured that the iPad being used to access company data in the LAN and in the cloud is legitimate, and used by the actual, legitimate user ~ all without having to manage identities in 3 different places? And without asking users to enter 3 different passwords?
In a way, this is an extension of the single sign-on issue (never truly resolved); now I want to identify my users wherever they are, whichever device they are using, whichever server they are trying to access, local or in the cloud. The scale of the problem is rather daunting in some cases. Some major software vendors offer solutions specific to their own environment; for instance, you can get AIM for Oracle, AWS has its own version to integrate your local network with their cloud solution, etc.
HIPAA, SOC and PCI are forcing the hand on this issue as well, as these regulations require that access to data be closely controlled; the systems handling data must be able to account for WHO is accessing that data. And again, IT departments do not want their users to get frustrated having to log on multiple times to multiple systems; they aim to have one place to identify users and correctly grant access to data only on an as-needed basis, which is also called role-based access – access only to the data your job requires you to have.