Support Free Internet - Stop SOPA and PIPA

Stop SOPA and PIPA: We openly declare our support for the efforts to prevent governments from gaining the power to police the Internet.


Kudos to Wikipedia


Talkback and comments are most welcome

Related posts

Privacy Ignorance - Was Eric Schmidt thinking?

Failed attempt at optimizing InfoSec Risk Assessment

Last weekend I got into a discussion with an insurance supervisor on the topic of risk assessment. He explained how actuaries work in insurance: there are standardized tables with the probabilities of an event occurring, like sickness or death, and these are used to calculate insurance premiums.

After digesting the explanation, my reaction was that I had found the holy grail of information security risk analysis: all it takes is for a large enough number of incident events to be collected into a statistical table, and every type of information security incident will have a standardized table of frequency and impact - no more assessments across the entire organization!

And in such a great and utopian solution, at least a quarter of the time the information security personnel will feel like they are doing actuarial jobs.
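As a thought experiment, here is a minimal sketch of what risk analysis would look like with such a table, using the classic quantitative formula ALE = ARO x SLE (annualized loss expectancy = annual rate of occurrence times single loss expectancy). All incident types, frequencies and impact figures below are invented for illustration:

```python
# Hypothetical 'actuarial table' for information security incidents.
# All frequencies (events per year) and impacts (loss per event) are
# invented for illustration - exactly the data that, as it turns out
# below, nobody publishes.
INCIDENT_TABLE = {
    "malware outbreak": {"annual_frequency": 2.0,  "impact_per_event": 15_000},
    "stolen laptop":    {"annual_frequency": 0.5,  "impact_per_event": 8_000},
    "web defacement":   {"annual_frequency": 0.25, "impact_per_event": 30_000},
}

def annualized_loss_expectancy(incident: str) -> float:
    """Classic quantitative risk formula: ALE = ARO x SLE."""
    row = INCIDENT_TABLE[incident]
    return row["annual_frequency"] * row["impact_per_event"]

for name in INCIDENT_TABLE:
    print(f"{name}: expected annual loss = {annualized_loss_expectancy(name):,.0f}")
```

With a trusted table, the whole assessment collapses into a lookup and a multiplication - which is precisely why the question that follows is so devastating.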

But I was quickly brought back to reality by the insurance expert, with a good question: actuarial tables are compiled from information that is mandatory to report - illness, fires, theft, even death. How will you collect accurate information about information security incidents, when reporting them is not mandatory?

And he was perfectly correct: collecting enough information to compile an actuarial table for information security is practically impossible. There are very few companies in the world that will disclose an information security incident unless it has impacted the public in a very obvious way. Also, the impact is valued by any number of methods, with different items included in the total, which makes the valuation of one incident incomparable to another.

Having a standardized method for risk assessment in information security based on hard numbers would be great. But since the factors involved in any incident are complex and variable, and consistent incident reporting is nearly impossible, we will be sticking to the current qualitative methods.


Talkback and comments are most welcome

Related posts

Example Risk Assessment of Exchange 2007 with MS TAM
Risk Assessment with Microsoft Threat Assessment & Modeling  

The Difficult Life of Mac in the Mixed Environment

Just before the sad event of Steve Jobs's death, we obtained a MacBook. While everyone is still immersed in reading the biography, we embarked on the journey of using a new OS for the first time. Here are the positive experiences and gripes we found when using it in a multi-purpose, multi-platform environment.

Please note that we are just starting out with the Mac, and some of our issues may have solutions that we haven't found yet.


The environment
The MacBook arrived into the very mixed environment of Shortinfosec:

  • Domain - an Active Directory domain at the Windows Server 2008 functional level, used only for testing. Computers are joined to the domain only for domain-related research.
  • Computers - work is done on our laptops: HP, Lenovo and Acer machines running Windows 7, Vista and Ubuntu.
  • Virtual environment - VirtualBox and VMware Player based virtual machines, mostly with bridged networking
  • Network - 802.11n Wi-Fi and a wired 1 Gbps Ethernet network, with Cisco and Huawei network elements
  • VPN - Cisco IPsec VPN for remote access
  • Storage - an iSCSI storage server built on Openfiler, on the wired LAN segment
  • Printing - a very old HP LaserJet printer, so old that we have to use a Centronics-to-USB converter, which lets us attach it to whichever laptop needs it.
What we do in this environment:
  • Testing attack tools and honing our skills
  • Running test scenarios on corporate products
  • Fiddling with Active Directory and trying to break it
  • Playing games
  • Blog management
  • A lot of article and paper writing
  • Java development
  • Odd accounting jobs
  • Lots of games ;)

The positives
We like to start on a positive note, so here are the things we like about our Mac
  • User experience - as Steve Jobs insisted, the user experience of working in Mac applications on the Mac is seamless. Everything just runs. Even attaching external hardware, like our 20-year-old printer, was a breeze - much easier than doing the same on Vista.
  • Battery life - the battery life is simply outstanding. The commercials say that the Mac can do 7 hours on battery, and that holds quite true for working in a word processor at 65% screen brightness.
  • Portability - not really comparable, since all our other laptops are 15'', but the Mac is very easy on the shoulders and an excellent companion at meetings.
  • Speed of functions - everything the OS implements, it implements VERY WELL. For example, the Cisco IPsec VPN connection using the native Lion client authenticates at least 10 seconds faster than the Cisco VPN Client for 64-bit Windows 7 (we actually measured).

The gripes
Naturally, not everything is that great, and here are the frustrations that we faced with our Mac.
  • The keyboard shortcuts - putting an IT pro who has worked on a PC and Unix for 20 years in front of a Mac running OSX is a special kind of hell: NONE of the keyboard shortcuts are the same, and it takes a significant effort to shift to the OSX shortcuts. They are not illogical, only completely different, which hampers productivity for anyone used to doing much of their work on a keyboard.
  • Interoperability with other platforms - there are interoperability gripes with a lot of things. The Mac can join an AD domain (sort of), but we had a lot of stress getting the Mac to use cached credentials. Much the same happened with a Linux-based LDAP service.
  • Software is missing - a lot of the productivity software we are used to is missing on the Mac: we stumbled on Visio, then on MS Project, then on Notepad++, then on 7-Zip... We didn't get into developing Java in Eclipse, because of the following point. Mind, there are replacements for most of the software we were missing, but productivity suffered while we found the appropriate software, bought it and learned how to use it. VMware Player is nonexistent for the Mac, so we are limited to VirtualBox.
  • Lacking native support for obvious items - the first disaster: no support for writing to NTFS. We had to revert to the dreaded FAT32, which was a deal breaker for development. As if that weren't enough, iSCSI is not natively supported, which further killed any attempt at accessing the large Java codebase on our iSCSI file server.
  • Remote access - so far we haven't discovered an efficient native tool to access and work on our Mac remotely. Apple Remote Desktop is shameless highway robbery - why should any company or user pay to access and manage a single Mac remotely? We are currently trying out VNC, which is not our preferred platform.
  • No native or free disk encryption - (updated, thanks to comments on reddit.com). Up to OSX 10.6, only Sophos SafeGuard provided full disk encryption for the Mac. OSX 10.7 adds FileVault full disk encryption, but we haven't tried it yet.


Conclusions and thoughts
We are not abandoning the Mac - it is a great tool and an asset in our little lab. But in the current state of things, it takes a lot of effort and compromise to fully migrate to the Mac platform, especially since multi-environment knowledge is required.

If today someone asks us whether a Mac is a good idea for company use, we would not be very supportive, for the following reasons:
  • Lack of business software compatibility
  • (Updated per the comment of Ryan Black) Incompatibility with writing to the NTFS filesystem, which is everywhere (previously we said NTFS file servers - but file servers are accessed through SMB, which is supported)
  • Learning curve for efficient use


Talkback and comments are most welcome


Related posts
Information Risks when Branching Software Versions
8 Golden Rules of Change Management

Choosing Data Storage - A difficult dance

IT has come a long way in the past 15 years, and has definitely advanced into the realm of commodity services. But there are still complexities under the hood of this commodity service. One of the most underestimated in complexity is data storage - it is taken for granted by everyone. For example, I frequently talk to a high-ranking manager in a software company, and he constantly states that all that is needed is another disk.



At the end of the day, data storage is very far from simple. Every organization needs to provide a storage service for its requirements. But storage is not only capacity, and one must be careful when choosing the appropriate storage solution. There are three basic options at the moment:

  • Cloud storage services
  • Open Source based storage systems
  • Commercial enterprise storage systems

We will evaluate each option against the following key parameters of a storage system:

  • Capacity - the first (and usually only) thing we think about when we talk about storage, and the easiest to achieve. Regardless of the chosen option, capacity is upgradeable. In open source storage systems, which are based on commodity hardware, upgrades are limited by the capabilities of the host server/box. The enterprise systems are much more upgradeable, but at a high cost. For a cloud storage provider, capacity upgrades are nearly infinite (at least on paper). It is wise to plan ahead and consider whether the future upgrade path will support your requirements.
  • Input/Output Operations per Second (IOPS) - the usually forgotten and very difficult to assess parameter, but nonetheless very important. IOPS expresses the number of operations that the system can perform on the storage within a time frame of 1 second. But operations on storage vary greatly (sequential or random, read or write; there are even separate front-end and back-end IOPS in RAID configurations), so a single number rarely tells the whole story. Cloud storage services do not publish IOPS, enterprise manufacturers always publish the IOPS number that is most beneficial to them, and the open source solutions mostly leave the IOPS to the builder of the system. In any case, the end result is: DO NOT TRUST THE NUMBERS. There are some nice estimation calculators online, like wmarow's IOPS calculator, but use them only for reference (see the sketch after this list). The smart solution is to test the storage service in a configuration as close as possible to the one you wish to use, and assess whether the performance is acceptable.
  • Access bandwidth - this is not disk bandwidth, which follows from the IOPS. Access bandwidth is the bandwidth between the server and the storage itself. Naturally, you want this to be as high as possible. For enterprise storage systems, discussing access bandwidth is moot, since such storage mostly connects through Fibre Channel, which has multiple links of 2, 4 or 8 Gbps. For open source storage systems, which are mostly iSCSI based, access bandwidth starts at 1 Gbps, minus Ethernet overhead. For cloud storage services, access bandwidth is a significant factor - cloud services are accessed through WAN links, where bandwidth is limited and may be prone to congestion. When choosing a storage system, test your application at the bandwidth you are planning to use.
  • Redundancy and high availability - what kinds of failures and incidents can a storage system survive? Cloud services claim that they can survive a lot - short of a cataclysmic event or a nuclear bombing - but such claims should be tested. Enterprise storage systems are designed to survive nearly any hardware issue within them, and provide the ability to replicate to other systems tens of kilometers away (naturally, at a very high price). The redundancy of an open source storage system depends on the actual hardware redundancy of the box the customer built; such systems provide some replication technologies, at varying levels of maturity. Always place data according to its importance to the company - can you survive without it?
  • Actual hardware - storage systems are composed of well-known components: hard drives, controllers, interfaces, power supplies. For both enterprise storage systems and cloud services, the customer does not need to bother much with the hardware - the provider constructs and combines the required components. On the other hand, when preparing an open source storage system, the customer usually builds the hardware, which means finding appropriate hard drives, RAID controllers, redundant power supplies, caching mechanisms, and LAN and FC interfaces. Building a system from scratch is a great experience, but commodity devices may be prone to many more failures than purpose-built hardware. Testing is not very useful here, but think ahead about the very real risk of commodity component failure.
  • Reporting - once the storage system starts working, reporting becomes an immediate issue. The customer will want to know the load on the system, on individual hard drives and logical devices, response times, utilization trends, etc. Again, enterprise storage systems shine in this area with an excellent portfolio of reporting tools, albeit usually at exorbitant prices. Cloud storage services may provide some reporting, but not too in-depth, and the open source systems are usually poor here, since the open source project is focused on functionality, not reporting. When choosing any storage system, always ask to look at live reports from the service/system you are planning to use.
  • Support - again, once the storage system starts working, there will be problems. And I guarantee you - the problems will not be simple 'it works or it doesn't' situations. There will be all kinds of complicated and seemingly impossible combinations of issues. And this is exactly where the customer will need support. But there is no clear-cut answer as to which type of storage system has the best support. One must tread carefully here, because good support means having trained support personnel, but also having very dedicated support personnel. By definition, enterprise storage systems have a great advantage in this area, but this advantage can easily be ruined by a support team that juggles many projects, is used for presales, or is simply not dedicated to supporting a customer. Cloud services fall into much the same category, but it can be difficult to discuss storage issues with a cloud storage provider: the engineers are impossible to reach, there is insufficient data to support an issue (reports, analysis), and the provider usually has a well-crafted SLA to protect itself from most issues. The open source systems have a support problem of a different kind - since they are built from software written by many, there are rarely any real experts to support such a system, unless you pay someone - and even then it may be a risk.
  • Vendor lock-in - cloud storage services are the strongest player in this area: if a customer makes a cloud storage service an important part of its infrastructure, it will adjust its operation to the cloud system and create a 'symbiotic' bond, making migration very costly. Enterprise systems are much easier to migrate from, since they are basically just huge hard drives. If all else fails, an operating-system-level copy command will provide a very crude but always successful migration. Open source storage systems have no lock-in: they are simple hard drives, where migration is a copy-paste operation.
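To make the IOPS and access bandwidth caveats above concrete, here is a minimal back-of-the-envelope sketch. The per-drive IOPS figure, the RAID write penalties, the read/write mix and the IO size are all illustrative assumptions, not vendor data - as stated above, only a test of the real configuration can be trusted:

```python
# Rough front-end IOPS and bandwidth estimate for a RAID set.
# All numbers below are illustrative assumptions - always test the real system.

# Common RAID write penalties: each front-end write costs this many back-end IOs.
RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def frontend_iops(drives, iops_per_drive, read_fraction, raid_level):
    """Estimate the front-end IOPS a RAID set can sustain for a given read/write mix."""
    backend_iops = drives * iops_per_drive
    penalty = RAID_WRITE_PENALTY[raid_level]
    # Each front-end IO costs 1 back-end IO if it is a read,
    # or 'penalty' back-end IOs if it is a write.
    cost_per_frontend_io = read_fraction * 1 + (1 - read_fraction) * penalty
    return backend_iops / cost_per_frontend_io

# Example: 8 x 15k RPM drives (~180 IOPS each, an assumed figure),
# RAID 5, with a 70% read / 30% write workload.
iops = frontend_iops(drives=8, iops_per_drive=180, read_fraction=0.7, raid_level=5)
print(f"Estimated front-end IOPS: {iops:.0f}")

# Bandwidth check: at an assumed 8 KB average IO size, does this fit in 1 Gbps iSCSI?
io_size_kb = 8
throughput_mbps = iops * io_size_kb * 8 / 1000  # megabits per second
print(f"Required link bandwidth: ~{throughput_mbps:.0f} Mbps "
      f"(a 1 Gbps iSCSI link carries roughly 900 Mbps after Ethernet/TCP overhead)")
```

Under these assumed figures, the RAID 5 set delivers roughly 760 front-end IOPS and needs only about 50 Mbps of link bandwidth, so a 1 Gbps iSCSI link is plenty; a more write-heavy mix or larger IOs shifts both numbers quickly.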

Conclusions
There are multiple pros and cons across our storage system parameters, but at first glance, the enterprise storage systems have the upper hand. Bear in mind, though, that such systems always come with exorbitant pricing, especially on any upgrades after the initial purchase. Therefore, such systems may be well suited for mission-critical applications, but are too price-prohibitive to be used for each and every purpose within a company.
The cloud services are extremely flexible in capacity expansion and redundancy (at least on paper). But quality of service and support may be lacking, as may access speed. So cloud-based storage may only be logical if you rent the full package - server plus storage in the cloud - to guarantee an overall service level. The remaining issue is lock-in: once you start using a cloud provider, leaving it may be a challenge, since you have adjusted your operation to its service and it may be costly to shift providers.
The open source systems are an interesting project, and can provide a very cheap solution for lower-tier functions. But actively using such a system means dedicating an employee or a team of homegrown experts to properly support it. Also, redundancy and high availability can become an issue in such systems.

In summary, do not choose only one storage solution: the enterprise system is well suited for business support, but it is huge overkill for test or proof-of-concept systems. Cloud storage services are a good choice for a cloud-based infrastructure, but the lock-in issue requires a careful strategic approach before lock-in occurs. So use everything, and always evaluate any solution for at least 3 months before committing to it.


Talkback and comments are most welcome


Related posts
RAID and Disk Size - Search for Performance
