Essential Management Semantics - Responsible vs Accountable



I had a discussion at the office about who is responsible for a certain activity. As expected, the junior colleagues got into an argument over who is more and who is less responsible for it. The Information Technology Infrastructure Library (ITIL) defines two distinct roles: 
  • Responsible and
  • Accountable

If you open Webster's dictionary (www.websters.com) and look up the adjective "responsible", you get the following description: answerable or accountable, as for something within one's power, control, or management.
If you do the same for "accountable", here is what you get: subject to the obligation to report, explain, or justify something; responsible; answerable.

It is common sense to assume that "accountable" and "responsible" are synonyms. But in both management and IT their meanings differ slightly, and that slight difference matters:

Accountable is the PERSON (singular) who answers for the entire set of results in a performed activity or process.
Responsible are the PERSON or PERSONS (singular or plural) who answer for the quality of a subset of tasks performed within an activity or a process.

So, there can be many responsible persons for the proper performance of a process, but there should ALWAYS be only ONE person accountable for the entire process.

Bonus Question
Q: When something does not get done right, who gets blamed - the Accountable or the Responsible?
A: The Accountable has the task of identifying which Responsible is failing at his job and taking measures to fix the problem. In the long run, however, if the problem is not fixed and the entire process fails, the Accountable will be called to answer.



Talkback and comments are most welcome

iPhone Failed - Disaster Recovery Practical Insight

A lot of Disaster Recovery procedures are considered failed simply because they took longer than originally planned and documented. And many of these procedures take longer not because of poor equipment or incompetence - on the contrary, they take longer because the people responsible focus primarily on the effort to fix the problem. Here is a practical example:

On Tuesday my iPhone failed. And since its warranty is long gone, I decided to fix it myself. I finally got it working on Wednesday night.

In my zeal to repair it, I forgot the first rule of business continuity - recover functionality within an acceptable time frame. And for an iPhone, just as for any other mobile phone, the main functionality is TELEPHONY! I was unavailable for most of Tuesday and during parts of business hours on Wednesday.

In the end, the problem was solved, and my iPhone is working again. But then all the missed calls came raining down, which kicked me back into reality and gave me a real perspective on what I should have done: find a low-end replacement phone instead of meddling with low-level formats, firmware flashing and DFU modes. That way I would have stayed contactable, and been under much less pressure to fix my iPhone quickly.

The same behavior can be seen in many organizations during IT disaster recovery. Disaster recovery is organized and coordinated by IT people - mostly very capable engineers. And yet, a large number of Disaster Recovery actions are delayed by these good engineers focusing primarily on fixing the engineering problem - not fixing the business problem.

In a Disaster Recovery situation, the recovery clock is known as the Recovery Time Objective (RTO): the time interval, starting from the moment of disaster, within which operation must be recovered to limited but essential functionality.

A good DR manager - regardless of his position and education - does his work with a stopwatch. The time he can allow the engineers to try to fix the problem does not have a formal name, so let's call it Fixing Time. It is the difference between the RTO and the tested time required to activate the Disaster Recovery systems.
Once this Fixing Time passes, Disaster Recovery preparations must start. If the problem gets fixed before the DR system activation completes, all is well. If not, the DR systems take over and the RTO is still met. Oh, and the engineers are relieved of the urgency pressure and can work on fixing the original problem for as long as it takes.
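
The Fixing Time arithmetic is trivial, but writing it down keeps everyone honest. Here is a minimal Python sketch, with illustrative figures rather than values from any real DR plan:

    from datetime import timedelta

    # Fixing Time = RTO minus the tested time needed to activate the DR systems
    rto = timedelta(hours=4)              # recovery time objective (illustrative)
    dr_activation = timedelta(hours=1.5)  # measured DR activation time (illustrative)

    fixing_time = rto - dr_activation
    print(f"Engineers may troubleshoot for {fixing_time} before DR activation must start")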

Back to my iPhone example - what was my timing? A phone RTO should be the recharge time - 2 hours. Getting a replacement phone is a walk to the store to buy the cheapest prepaid model, or borrowing a spare from a friend - 30 minutes. So I should have kept my cool and tried to fix the problem for only 1.5 hours before looking for an alternative. After that, I could have spent a week on the iPhone - no pressure to fix it fast.

Related posts

3 Rules to Prevent Backup Headaches
Business Continuity Analysis - Communication During Power Failure
Example Business Continuity Plan for Brick&Mortar Business
Business Continuity Plan for Blogs
Example Business Continuity Plan For Online Business

Talkback and comments are most welcome

Cloud Computing - Premature murder of the datacenter

Last week Amazon announced its new cloud computing service - the Amazon Elastic Block Store (EBS). It is a remote storage service with an excellent storage/cost ratio, which is even advertised as a replacement for large enterprise storage systems. Naturally, the ever controversy-seeking journalists hurried to declare the time of death of the enterprise data center, and included this view:


Though most businesses are quite comfortable in using external utility
services for electricity, water, and Internet access — and we even use banks to
hold and pool our money with others “off site” — we are still largely unready to
move computing off-premises, no matter what the advantages

It is correct that certain elements are used as external utilities, but let's compare the services from a realistic point of view:
  • Electricity as a service - because everyone is entirely dependent on electricity, the grid itself is designed to be resilient, have fast failover times, survive major catastrophic events at power plants or within the grid, and even re-route additional supplies from other countries if need be - at horrible cost, but it works! Oh, and for the simple case of a grid glitch, we'll spend $500 on a UPS and another $5,000 on a diesel generator and we're all set!
  • Data storage as a service - for data storage services, information is needed here and now - exactly like electricity. If we are to outsource our cloud information storage to a provider, that may be well and good as long as it works. However, in the information security world there are three key concepts, and our cloud data storage must guarantee commensurate levels of:
    • Confidentiality - in cloud computing, location is an ambiguous concept. Data will exist on different storage elements at different physical locations and will traverse millions of miles of physical networks neither related to nor in any way answerable to the customer - as long as it gets there. Who will guarantee that confidentiality is maintained? Oh, and I forgot - you ACCESS the data via the Internet, so whenever a confidentiality breach does occur it can always be blamed on your Internet connectivity or on a security breach at the access provider, not on the storage service provider.
    • Integrity - will probably be maintained, since there are very simple ways of doing comparisons by keeping a small piece of control information with each set of data (a minimal checksum sketch follows this list) - as long as fragments don't get lost, in which case we have a problem of...
    • Availability - in cloud computing, information is everywhere and gets collected and presented at the user's request. If for any reason this data cannot be reconstructed and verified, it is lost. And again, access to the information is through the Internet - which is not a service with guaranteed availability, since it depends on an international mesh network controlled by a multitude of independent entities. Unless you spend top dollar on dedicated data links, nobody will sign a strong SLA for Internet access - it's impossible to achieve.
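
To illustrate the integrity point: the "control information" can be as simple as a cryptographic hash stored alongside each piece of data and re-checked after retrieval. A minimal Python sketch (file names are illustrative):

    import hashlib

    def sha256_of(path: str) -> str:
        """Return the SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Before upload: record the digest next to the data.
    expected = sha256_of("report.xls")

    # After retrieval from the storage provider: verify the copy.
    if sha256_of("report_downloaded.xls") != expected:
        print("Integrity check failed - the retrieved copy differs from the original")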

But why don't we have a local backup, just like the UPS? Of course we can - it's known as an enterprise data center!

While cloud computing is making strides in the right direction, its current level of usability is restricted by the "best effort" nature of the entire network on all sides. So the users of cloud computing are the ones who find it acceptable to:

  • have delays in access to information,
  • have some data lost, and
  • suffer information leakage that will not make a significant impact.

In the meantime, the enterprise data centers are still humming along strong.

Related posts

Datacenter Physical Security Blueprint

3 Rules to Prevent Backup Headaches

Talkback and comments are most welcome

Fedora Servers Compromised

According to this announcement from yesterday, Fedora servers were compromised.

Here is a scary part of the announcement:

One of the compromised Fedora servers was a system used for signing
Fedora packages
That particular server had very little need to touch the Internet, and should have been properly isolated - even placed on a completely separate network from the Internet-accessible servers.

So readers should be careful when downloading and installing the current Fedora distribution and packages. I would wait for the next official release.

This event goes to show that large companies, regardless of industry, can make poor security choices. And because high-profile companies are a great publicity target, these poor choices are quickly found by hackers.

Anyway, respect to RedHat for the announcement. A lot of companies will simply sweep such an event under the rug.

Related posts
Portrait of Hackers

Talkback and comments are most welcome

Competition Results - Computer Forensic Investigation

The Computer Forensic Investigation Competition is closed, and here are the results

What was there to be found:

  • Tshark sniffer - part of the wireshark suite in /moodle/enrol/paypal/db
  • NetCat tool for backdoor creation - renamed as MyTool.exe - in /moodle/auth/ldap
  • An MP3 of Sergio Mendes & Brasil 66 - Mas Que Nada renamed as html document - in /moodle/auth/imap
  • A TrueCrypt rescue disk ISO renamed as MyDoc.doc in /moodle/lib/geoip/Documents/
  • OSSTMM Penetration Testing Methodology with penetration details in deleted file osstmm.en.2.1.pdf in /moodle/enrol

Finding the above was sufficient to win the competition. Alternatively, instead of the OSSTMM document you could find the two items below:

  • A decoy Metasploit developer's guide PDF in /moodle/lib/geoip/Documents - actually, that document has nothing to do with the actual hacking unless you also discover the
  • remnants of a deleted Metasploit framework in /moodle/lib/geoip/Documents
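
Several of the planted items were ordinary files renamed to disguise their type (an MP3 posing as an HTML document, an ISO posing as a .doc). A quick triage trick is to compare each file's extension against its magic bytes. Here is a minimal Python sketch - the signature table and the evidence path are illustrative, and far from exhaustive:

    import os

    # A few well-known file signatures (magic bytes) and the extensions
    # they normally carry - illustrative, not exhaustive.
    SIGNATURES = [
        (b"MZ",   "Windows executable", (".exe", ".dll")),
        (b"ID3",  "MP3 audio",          (".mp3",)),
        (b"%PDF", "PDF document",       (".pdf",)),
    ]

    def flag_renamed_files(evidence_root: str) -> None:
        """Print files whose magic bytes do not match their extension."""
        for root, _dirs, files in os.walk(evidence_root):
            for name in files:
                path = os.path.join(root, name)
                with open(path, "rb") as handle:
                    header = handle.read(8)
                for magic, kind, expected_exts in SIGNATURES:
                    if header.startswith(magic) and not name.lower().endswith(expected_exts):
                        print(f"Suspicious rename: {path} looks like a {kind}")

    flag_renamed_files("evidence/moodle")   # hypothetical mount point of the image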

Who did the investigation (in chronological order of reporting the findings - earliest first)

  • Lawrence Woodman - Found 4 incriminating pieces of evidence. Missed the real penetration tutorial and focused on the dummy - Metasploit.
  • Tareq Saade - Found 4 incriminating pieces of evidence. Missed the real penetration tutorial and focused on the dummy - Metasploit.
  • Bobby Bradshaw - Found 3 incriminating pieces of evidence. Missed both the real and the dummy penetration testing documents (OSSTMM and Metasploit) and missed the TrueCrypt rescue disk ISO.
  • Daniele Murrau - Found all incriminating evidence. The toolset used was Autopsy, as part of the Helix distribution.
  • Lesky D.S. Anatias - Found all incriminating evidence. The toolset used was PyFlag and Sleuth Kit.

Other participants - did not qualify for the final review because they did not send details of methodology or findings (no particular order):

  • Phil (no last name) - reported finding 2 pieces of evidence, but did not send the methodology used or details of the findings
  • snizzsnuzzlr (obvious nickname) - reported finding 5 pieces of evidence, but did not send the methodology used or details of the findings
  • Fender Bender (obvious nickname) - reported finding 3 pieces of evidence, but did not send the methodology used or details of the findings
  • Sniffer (obvious nickname) - reported finding 2 pieces of evidence, but did not send the methodology used or details of the findings


And the winner is - Daniele Murrau

Here are his conclusions and methodology as a downloadable PDF

We are also naming two honorable mentions:

  • For speed - Lawrence Woodman, who produced a nearly full analysis in a tremendously short time, but most probably missed the OSSTMM and the Metasploit remnants because he was in a hurry
  • For thoroughness - Lesky D.S. Anatias, who discovered ALL evidence, including the Metasploit remnants

Related posts
Competition - Computer Forensic Investigation
Tutorial - Computer Forensics Evidence Collection
Tutorial - Computer Forensics Process for Beginners

Talkback and comments are most welcome

No Privacy - Saw You Cheating on Image Search

What is the next big privacy issue? Image search. But not the current image search, which actually searches through file names and metadata - actual, pattern-matching image search.

The problem of matching patterns between images regardless of perspective and color has been an academic topic for a long time, and has found application in OCR systems, fingerprint identification and some high-cost expert systems. For the enthusiasts, here is a good article on the math behind image search: Bayesian geometric hashing and pose clustering.
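
To get a feel for the simplest form of content-based matching - far cruder than the geometric hashing methods referenced above - here is a minimal average-hash sketch using the Pillow imaging library. Two images whose hashes differ in only a few bits are likely near-duplicates. The file names are hypothetical:

    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Crude perceptual hash: shrink, grayscale, threshold against the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for pixel in pixels:
            bits = (bits << 1) | (1 if pixel > mean else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    d = hamming_distance(average_hash("party_photo.jpg"), average_hash("uploaded_copy.jpg"))
    print(f"Hamming distance: {d} ({'likely match' if d <= 5 else 'probably different'})")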

While the technology has been in research for more than 20 years, the current trend is turning towards image and video search not for academic reasons, but for profit. Paul Murphy wrote a critique of the current state of search and the golden opportunities.

Yes, matching an uploaded image against a database of images and videos and returning similar items is a very valuable and profitable technology - just imagine the amount of advertising that can be targeted that way!

So it is safe to say that with the current advances in processing power, storage and network bandwidth, image search will happen, and quite fast. It will probably deliver a lot of benefits apart from profits for the search engines, such as:

  • Pattern matching for obscure symbols or painting styles across many publications and museums
  • Searching for your lost brother on the Internet by uploading his childhood photo
  • Even in kidnapping cases, searching across the vast data sets of video surveillance in hotels, train and bus stations, airports, etc.
But it will also enable a huge amount of privacy breaches and dangerous situations, like:
  • A jealous girlfriend or boyfriend may use the search to sift through MySpace and YouTube party videos looking for the partner's possible indiscretions
  • Sexual deviants may use the online video and image archives to search for their preferred type of targets
  • Criminals will be able to look for a multitude of photos and blueprints of a possible target (a local bank building) by having only several photos and a sketched schematic of the publicly accessible part of the building
  • Identity thieves may find the actual persons the target works with or is familiar with, to prepare a better attack
We are becoming a very networked world, with direct, online access to an ever vaster set of information.
So just be prepared to tell the truth to your wife when you come home from work, because soon she'll be able to Google you at the local bar with friends instead of at a late night at the office.


Related posts
Internet Social Engineering - Avoid Con Tricks
8 Tips for Securing from the Security expert
Risk of losing backup media - real example
8 Steps to Better Securing Your Job Application

Talkback and comments are most welcome

When Will Your Mobile Phone get Hacked?

With falling prices and improving technology, mobile devices are the next big communication platform. But they are also the next big hacker target.

The history

With WinCE, Linux and Symbian, the trend of "computer-like" mobile phones began. Yes, these platforms had their flaws and security problems, but at the time of their appearance there were two factors mitigating an all-out attack or exploit:

  • The devices had only voice and very low-speed data capabilities at high prices - very few people used their devices as more than an electronic address book, and surfing the web was out of the question given their technical capabilities and data transfer prices
  • The devices' high price prevented most people from owning them - again, this reduced attack deployment and spreading capability, so an attack vector against them was easily quenched.

The present

Enter the iPhone, or as many users called it, the "Jesus Phone". Suddenly everyone wants one, and Apple has happily sold more than 10 million units worldwide.

Oh, and Steve Jobs's business decision to lock down the iPhone helped develop a very powerful user and hacker community; suddenly, information on exploitation techniques was being shared among enthusiasts.

To fight back on the market, everyone and their mother produced an iPhone killer - both in interface and in functionality.

With hot spots and unlimited data plans all over the place, people are using these devices to read their e-mail, surf the web, even download and upload files.

Does anybody see a resemblance to a laptop?

The future

Enter Android - smart phones will become cheaper! The open platform concept ditches the "Security by obscurity" element, so now a lot of people will have a look into the vulnerabilities of smart phones.

In the war for customers, the providers will offer more and more hot spots and cheaper data plans.

At the moment, I'm turning off my iPhone's wireless, since it cannot reach a hot spot. In a year, my data plan will probably be such that I don't care whether I'm online or offline. So I'll be online! And there will be millions of users like me, and all of them can become potential targets for hacker attacks.

The road to a solution

One should expect security to become a major part of platform development. Android security is already lacking, and they are trying to fix it.

But it's not only Android that should be treated this way. Windows Mobile, Symbian, Darwin... ALL of them should treat terminal (mobile device) security as a crucial part of platform development.

This goes for the manufacturers that will be using these platforms to create their handsets - at the end of the day, nobody will say that Android was hacked; instead, Nokia, Motorola or HTC will be hacked.

And so far, this element of security has often been forgotten or ignored by the manufacturers.

So, in summary, I'm expecting your mobile phone to be hacked within the next year. I'll revisit the topic then, to lament the past.


Talkback and comments are most welcome

Where is that XP Install CD?

Today, Christopher Dawson has a post at ZDNet titled Don't downgrade me to XP!. His take on the Vista subject is that we should bite the bullet and go with Vista, since XP is already 7 years old, so installing it on new equipment and running it for 4 years would bring it to an age of 11 years - way too much in an industry where anything older than 4 years is ancient!

But turning back to reality, let's analyze who might benefit from using Vista instead of XP.

First the proposed benefits: Apparently, Vista has

  • better security
  • better application support
  • is more modern and far easier to use.
The users have already had their say:
  • Vista and XP are on par on security, the only remaining benefit being that XP support is ending.
  • Application support in Vista is lacking, and a lot of drivers were still flaky a year after Vista was released.
  • The interface, although modern, is a huge resource hog and hampers a lot of users.

So, who will benefit from Vista?

  • Not the corporate users - corporations are riddled with legacy applications, have very stringent procedures for upgrade and are generally very careful when adopting anything. In such an environment, implementing Vista will require
    • additional training for the users
    • significant testing to verify that all corporate applications are working
    • a big chunk of change to bring all available hardware up to Vista's hardware requirements
  • Not the power users - power users have specific applications they use, and they expect those apps to run as fast and as smoothly as possible. Installing Vista will very probably:
    • reduce the performance of their applications
    • possibly hamper the operation of their applications
    • make them re-learn part of their computer use - which takes time they could spend much more productively
  • Not the gamers - unless they insist on DirectX 10, XP still delivers more performance bang for the same hardware buck, which is very important for gamers, since they are in the business of squeezing every last frame per second out of their hardware. Some of the older readers will remember installing special memory managers to take maximum advantage of ALL computer resources. Users like this DON'T WANT a resource hog like Vista.

In summary, although XP is 7 years old, Vista hasn't delivered any significant improvements that would justify its use. XP still delivers much better productivity.

So the only ones that will take up Vista are the ones that really don't mind productivity changes:

  • Newbies - anyone just starting out in computing, so they don't have any specifications and expectations to meet, nor are particularly oriented towards any specific application.
  • Testers - the people that must have it, in order to prove that their product works with Vista
  • Technology enthusiasts - the people that want and need to have the latest and greatest product, whether to learn it or to show off.
  • Low-power computer users - any users who use only the most basic computer functions, like word processing, simple spreadsheets, e-mail, calendar and web surfing.

The above list translates to home users, Quality Assurance and parts of R&D departments. Sorry Christopher, but even after 7 years of use, XP still looks much better than Vista.

Reality check: we WILL move to Vista once XP support has ended and the next major flaw is found. But in the meantime, I just got a new laptop... Where the hell is my XP install CD?

Talkback and comments are most welcome

Is the Phone Working? - Alternative Telephony SLA

Telephony costs are one of the main targets of cost cutting in many large companies. In this effort, companies are turning to alternative voice providers, who offer much cheaper calls and more flexible services. But these new operators use new technologies and are relatively new on the market, so the buyer should approach an alternative telephony service with care and apply a proper Service Level Agreement.

What are we used to?
In traditional telephony, voice reliability is taken for granted, and all equipment is designed to offer very high availability. Capacity is not an issue either, since each incoming circuit to a switch is dedicated, and the switching capacity of the telco switch is calculated via well-known formulae (Erlang models) to provide switching for all initiated calls.
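
For reference, the classic Erlang B model estimates the probability that a call is blocked, given the offered traffic (in erlangs) and the number of circuits. A minimal Python sketch of the standard recursive formula - the traffic and circuit figures are illustrative:

    def erlang_b(offered_erlangs: float, circuits: int) -> float:
        """Blocking probability for the offered traffic on a given number of circuits."""
        blocking = 1.0
        for m in range(1, circuits + 1):
            blocking = (offered_erlangs * blocking) / (m + offered_erlangs * blocking)
        return blocking

    # Example: 50 erlangs of offered traffic switched over 60 circuits.
    print(f"Blocking probability: {erlang_b(50.0, 60):.4%}")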

PSTN availability was measured at 99.99% (a maximum of 4 minutes of outage per month, or a total of 52 minutes of outage per year!) back in 1993, and that number is now closing in on 99.994%. Compared to this, classical IP data services are struggling to pass "two point five nines" (99.5%), which is equivalent to 3.6 hours of outage per month, or nearly 2 days per year.

For all medium to large businesses (especially those operating retail), telephony is a "default" service - one that must ALWAYS work and is truly taken for granted.

The potential challenges with an alternative voice provider
When a company decides to use the services of an alternative telephony provider, several issues may appear. The alternative provider may bypass the ILEC (Incumbent Local Exchange Carrier) to minimize costs, and quite often will arrive at your premises via a data link that attaches to the company's PBX. Once we walk into the realm of data transfer, things change considerably:

  1. The data link is terminated on lower-reliability active equipment (usually a router or an L3 switch) - to minimize costs, this device will not be of a particularly high class, and its hardware reliability will be around 98-99%.
  2. The data link can be prone to faults at the physical level - alternative telephony operators are not too big on infrastructure protection and want fast deployment, so it can happen that the operator's cable is strung along power lines, placed in central heating ducts under the city, or, in extreme examples, even illegally buried in soft-ground areas (parks, recreation tracks, green patches), where it is unmarked and easily falls victim to any other construction or renovation activity.
  3. Data links are by default based on best-effort technologies - so IP packet drops, retransmissions and delays can occur.

All this translates to a whole new ballgame in terms of controlling the services offered by your alternative voice service provider.

Establishing proper criteria for service quality

So in order to properly manage the alternative voice service, one must define which criteria should be measured:

  1. Keep the good old data SLA - this is to control the overall data link quality, which is easiest to measure
  2. Establish measurement of established, failed and dropped calls - via the router infrastructure connecting you to the alternative telephony provider. This measurement will be enabled through vendor-specific router functions, most often through syslog event analysis.
  3. Define the guaranteed volume of simultaneous calls that the provider will deliver - measure the delivered volume by comparing the established, failed and dropped call counts from point 2.
  4. Define and apply penalties both on overall link quality (point 1), since it affects all calls, and on the volume of realised calls (points 2 and 3), since they reflect the actual ability to use the service as contracted (a minimal calculation sketch follows this list).
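
Points 2 through 4 boil down to simple arithmetic on the call counters. A minimal sketch, assuming the counters have already been extracted from syslog; the threshold and penalty figures are illustrative contract values, not recommendations:

    def call_completion_ratio(established: int, failed: int, dropped: int) -> float:
        """Share of attempted calls that were successfully completed."""
        attempted = established + failed
        return (established - dropped) / attempted if attempted else 1.0

    # Hypothetical monthly counters pulled from the access router's syslog.
    ratio = call_completion_ratio(established=41200, failed=900, dropped=350)
    sla_target = 0.985            # contracted minimum completion ratio
    penalty_per_point = 500.0     # penalty per percentage point below target

    if ratio < sla_target:
        penalty = (sla_target - ratio) * 100 * penalty_per_point
        print(f"Completion ratio {ratio:.2%} is below target - penalty {penalty:.2f}")
    else:
        print(f"Completion ratio {ratio:.2%} meets the SLA target")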
Related Posts
9 Things to watch out for in an SLA
5 SLA Nonsense Examples - Always Read the Fine Print

Talkback and comments are most welcome

System Management - When do the IT Admins Screw Up?

The main purpose of IT within a company is to provide IT services to the business. This means that the responsibility for availability, response time, and service quality rests mostly on the shoulders of IT admins.

In most cases IT personnel understand the burden they bear very well, and are extremely careful in their daily activities. But if certain processes and IT culture are not in place in an organization, system admins can cause disruptions.

Here are the conditions under which an IT admin can screw up, with real-life examples:

  1. Lack of Proper Testing and Contingency Planning 1 - A corrective update batch process was run on the CRM system. The admin started the process at 9 PM to complete overnight and left it without supervision. The process ran until 5 AM, when it failed and the database began a rollback. The rollback took another 8 hours, incapacitating the company's CRM until noon the following business day.
  2. Lack of Proper Testing and Contingency Planning 2 - During database maintenance, several large tables were moved directly to archive and manually recreated as empty ones. The system ran well for 5 days, after which every operation became very slow or could not be performed at all. A simple analysis concluded that during the archive-and-recreate process the indexes were not recreated on the new tables, forcing the database to do a full table scan for every operation. Since the tables started out empty, this did not become a problem immediately.
  3. Lack of Coordination and Communication - A clustered mail server exhibited errors in mailbox processing. Two administrators were called in to remedy the problem. The first administrator initiated a mailbox rebuild process. 10 minutes later, the second admin instructed the cluster to fail over the mail server resources to the other server. The rebuild process crashed and corrupted the entire mailbox pool, which had to be restored from backup. All emails received after the backup were lost.
  4. Not following procedures - The corporate web server sent an alert of low disk space, so a system admin searched the disk for items to delete. He found a folder "Copy of wwwroot" and assumed it was a copy of the web server root directory. He deleted the folder and all subfolders, thus creating free space. Five minutes later, the manager called to report that the corporate web site was down. Another admin had assisted web development in deploying a new version of the portal the previous day, and they had placed it in "Copy of wwwroot". Luckily, the old version was still available, so a temporary version of the portal went up in 10 minutes.
  5. Direct training or testing on the live environment - A newly hired administrator was given access to administrative passwords. Since his new job would require him to administer routers, after work he decided to try out some router commands. He connected to a router whose IP address was commonly mentioned, logged on and started typing basic commands, specifically the 'show' command set, using its abbreviated form 'sh'. He got braver, entered an interface configuration, typed 'sh' again and pressed Enter. The router complied and returned nothing. What the admin didn't know was that at the interface level, 'sh' abbreviates 'shutdown'. The company's Internet link was down for 2 hours, until a senior admin brought the interface back up.

Related Posts

8 Golden Rules of Change Management

Talkback and comments are most welcome

The call records theft - security of batch processing

Batch processing is most often overlooked during any security analysis. The main reason is that batch processing operates on millions upon millions of records at a time, and does so at a very fast rate. The second reason is that batch processing usually functions as a 'black box' with little input or intervention from users, so it is easily forgotten by Security Officers.
But batch processing programs can contain very dangerous covert code which, if not investigated, can go unnoticed for years.

Example scenario

Some of the largest batch processing systems in the world are telecom billing systems. As an introduction, here is the generic billing process in a telco environment:

  1. The call data is recorded by the Telco switches in specific records called Call Detail Records (CDRs). They can be sent online to billing but in most cases for speed and redundancy are simply saved in files
  2. The CDR files are then transferred to a Mediation System.
  3. The Mediation System is a conversion program that knows how to read the file formats of each switch manufacturer and version (Nokia, Siemens, Ericsson, Nortel...) and to convert the information in each CDR into a consistent and unified record format for all calls in the network.
  4. The converted unified CDR records are written to the central billing database.
  5. The billing software reads through the CDR records in the billing database, identifies each originating phone number and its owner, and applies tariffs and discounts to the call.

Steps 3 and 5 are usually batch processing programs that run at least once a month, but usually every week or fortnight, to split the overall processing into smaller chunks.

A programmer or engineer with malicious intent can insert a covert process into either the mediation or the billing process, which can:
  1. Modify CDRs to reduce his own or someone else's costs, or to transfer costs to other subscribers
  2. Modify CDRs to erase records of calls being made
  3. Search for and collect specific information about calls made to and from a telephone number, for blackmail, sale or publication purposes.

Points 1 and 2 are very well addressed in a telco environment, since they impact the operator's income. Therefore, fraud and internal audit departments are always hunting for any indication of billing data modification, and are constantly matching the resulting billing against CRM trends and reports from traffic analysis software.

Point 3, however, is poorly addressed: information leakage in such an environment is difficult to identify, and since its impact is not directly visible on the bottom line, it can go on unnoticed for years.

Here is a detailed attack analysis

  1. A malicious attacker would install a covert process into the mediation or billing system.
  2. He would issue a command to the covert process using ping or e-mail, or simply send a patch with a changed config parameter, instructing it to sift through the processed data and make simple copies or summaries of certain information, by originating or destination phone number.
  3. The copied information would be hidden in a file, possibly even encrypted, and can be sent out using any number of covert channels like:
  4. Encapsulating the information in a slow-rate ping payload and sending it to an Internet server
  5. Encapsulating the information in a slow-rate ping payload and sending it to a compromised client on the network, from where it can be sent further via ping, e-mail or HTTP
  6. Burying the file somewhere on the computer drive to be copied during a patch or maintenance visit

There is an excellent market for this information, ranging from private investigators collecting information for a client to criminal groups preparing blackmail.

Security Verification of Batch Processes

The verification of batch processes should be done at least once a year, and any time a possible breach of confidentiality is identified. The process below is generic and can be applied to any batch process.

  1. Enumerate batch processes and map them against manufacturer technical documentation to identify expected triggered processes.
  2. For each process, use Process Explorer (on Windows) or lsof and strace/truss (on Unix/Linux variants) to enumerate the files, sockets and system calls it touches.
  3. Set a temporary full file access audit to review which files are being touched during the process - the results will probably be enormous, but can yield additional insight.
  4. Run a passive sniffer on the LAN interface of the investigated server to identify where the information is travelling and in which types of packets.
  5. Run the process both on test systems and on production - the covert code can be set up to run only on production in order to avoid detection. Billing and mediation servers change rarely, so the covert process can be configured to enable itself ONLY if the MAC address of an interface or the IP address matches a specific value, etc.

The verification process is very long and difficult, so be prepared for a lot of screen staring!
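
As an illustration of point 4, the ping-based covert channel described earlier can be hunted by flagging ICMP packets that carry unusually large payloads. A minimal sketch using the scapy library - the interface name and size threshold are illustrative assumptions, and capturing requires the appropriate privileges:

    from scapy.all import ICMP, IP, Raw, sniff

    PAYLOAD_THRESHOLD = 64   # bytes; ordinary echo requests rarely carry much more

    def inspect(packet) -> None:
        """Flag ICMP packets whose payload is suspiciously large."""
        if packet.haslayer(ICMP) and packet.haslayer(Raw):
            payload = bytes(packet[Raw].load)
            if len(payload) > PAYLOAD_THRESHOLD:
                print(f"Suspicious ICMP from {packet[IP].src}: {len(payload)}-byte payload")

    # Passive capture on the billing server's LAN interface (name is hypothetical).
    sniff(iface="eth0", filter="icmp", prn=inspect, store=False)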

Related posts

8 Tips for Securing from the Security experts

Talkback and comments are most welcome

Tutorial - A Poor Man's Secure USB

USB flash thumbdrives are efficient, large-capacity, fast and very resilient. So everyone uses them to transport files, and very often to transport corporate documents. But USB thumbdrives are also very easy to lose and to steal. Naturally, there are secure USB thumbdrives, but their price may not get approved by management, especially if the company purchases a large number of thumbdrives.
So, in order to maintain a high level of security while keeping your budget at the same level, here are two tutorials on how to create a secure USB thumbdrive.


I. Security For Usage

If the user will use the thumbdrive to transport documents and open them at unknown locations and computers, you should create an encrypted virtual volume in a file on the thumbdrive.

Here is an excellent tutorial on how to create this encrypted volume, but with the following modifications:

  1. Prior to creating the encrypted volume, format the USB thumbdrive to clear all previous content.
  2. The file size of the virtual volume should fill the ENTIRE FREE SPACE of the USB thumbdrive - this way a lazy user cannot copy something into the unencrypted space, since there is no unencrypted space.
  3. The tutorial gives instructions on how to create an autorun file; you can skip that step, since the TrueCrypt wizard will create the autorun file for you.
  4. Set the TrueCrypt files TrueCrypt.exe, truecrypt.sys and truecrypt-x64.sys as read-only, to prevent accidental deletion (a small helper sketch follows this list). Naturally, you cannot make the actual volume file read-only, since you need to write to it.
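
For steps 2 and 4, a small helper can report how much free space is left (to size the volume file) and set the read-only attribute on the TrueCrypt files. A minimal Python sketch, assuming the thumbdrive is mounted as drive E: with the files in its root:

    import os
    import shutil
    import stat

    DRIVE = "E:\\"   # hypothetical mount point of the thumbdrive

    # Step 2: see how large the encrypted volume file can be made.
    free_bytes = shutil.disk_usage(DRIVE).free
    print(f"Free space available for the volume file: {free_bytes / 1024**2:.0f} MB")

    # Step 4: mark the TrueCrypt files read-only to prevent accidental deletion.
    for name in ("TrueCrypt.exe", "truecrypt.sys", "truecrypt-x64.sys"):
        path = os.path.join(DRIVE, name)
        if os.path.exists(path):
            os.chmod(path, stat.S_IREAD)   # sets the read-only attribute on Windows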


II. Security For Transport

A much higher level of security can be attained if the USB thumbdrive is used only as a transport of files between known computers. (For instance, the office PC and the home PC of an employee)

For such a home worker, the process of creating the USB thumbdrive is almost the same as under I. Security For Usage, with the following difference:
  • When creating the volume password, check the Use keyfiles option, then choose Generate Random Keyfile and save the file under an arbitrary name.
After completion of the format, the administrator should place the keyfile on both the office PC and the home PC of the thumbdrive's user. To do this, the administrator should use different media (a CD-ROM or another thumbdrive).

With this process, in order to decrypt the encrypted volume, the user needs two things: the password and the keyfile. So even if the USB thumbdrive is stolen and the password is known, nothing can be done without the keyfile.

Naturally, this is not foolproof. The security of the home computer must also be taken into consideration, and these computers are usually not too secure. Once the files are decrypted on the home computer, they can fall prey to any trojans or spyware that got into that computer via the Internet. So a very prudent measure is to pair this implementation with a corporate license for antivirus/antispyware and firewall software on the employee's home PC.


Conclusion

These processes improve the security of files stored on a USB thumbdrive, but since the software is designed for a single user, each encrypted volume has a single decryption password, so only one person should use it.

Related posts
6 steps to securing your backup media
Be Aware of Security Risks of USB Flash Drives

Talkback and comments are most welcome

Real and Bizarre Information Security Situations

Information security has a lot of flaws and errors. Some are caused by people, some by technology. And many are so flagrant that no one would believe they are possible. Here is a list of the most bizarre but real information security situations that I have encountered over the years of my work (naturally, everything is anonymized):

  1. An organization had a secure site where the log-off button simply navigated the visitor away from the main page, but did not tear down or in any other way disconnect the session. Until the browser was closed, you could use the back button to go back and continue working with valid credentials.
  2. A security-savvy user sent an encrypted file via public e-mail, and then sent the decryption key in another clear-text e-mail with the subject line "Password".
  3. A user decided to rip music from an audio CD at work and send the rips to her private e-mail. Instead of converting it to MP3, she simply copied the AVI files into an e-mail as attachments, without checking the file size, and sent it. The corporate mail server got clogged up and did not operate for 30 minutes. The first song in the mail was "Every Little Thing She Does Is Magic". The employee was reprimanded.
  4. There was a highly confidential document safe with the combination written on the top corner of the door. When asked about this, the custodian of the safe explained that the lock is "funky", so even with the known combination, only he knew how to twist the dial into opening the safe.
  5. A consultant was hired to analyze database performance. He wasn't given passwords and had to communicate all his requests for reconfiguration to the DBA. The consultant considered this approach to be too slow, and at one point used a sniffer to capture the password, apparently in order to work faster.
Have similar experiences?
Please share in the comments

Competition Software Testing - Benefits and Risks

Testing of any solution, especially software, is a slow and painful process which requires a lot of human resources and proper design of test scenarios. Because of the slowness of the process, things can be missed.

So a number of companies organize competitions in which they offer rewards to whoever breaches the security of their software, finds a bug, or performs a similar feat. Jon Oltsik, in a text titled Carnival atmosphere in security, compares this process to a carnival and criticizes it heavily.

Although I agree that the competition is not a good approach, here is a more constructive analysis of the reasons:

Benefits
Here are the perceived benefits of competition-style testing of any software. All of them are naturally legitimate, and every company would like an army of very dedicated testers for its product, at a price at which it could never hire that many testers:

  • The application will get stress tested - An army of testers hunting the prize will put the software through its paces and construct very creative use cases to reveal bugs.
  • A lot of boundary conditions will be checked and re-checked - These bug-hunting testers will throw all kinds of garbage at the software, precisely where most applications fail and open a way to fraud, security breaches or simply erroneous operation.
  • Implementations of standard protocols and algorithms will be checked for errors - The most frequent path to a breach of application or system security is a poor implementation of the standards that are trusted the most. For instance, while AES encryption is virtually unbreakable within an acceptable time frame, its poor implementation in program code can lead to easy exploits and breaches. Such errors can be identified in a prize-hunting test.

Drawbacks
Naturally, it's not all good. There are several risks to using a competition for software testing, and here are the most dangerous pitfalls:
  • Only one winner - only one vulnerability. This method of testing will actually identify only one vulnerability. There can be controls to prevent this, like a submission window after which a winner is declared, but usually the first hacker to perform the breach will use covert channels to advertise his success, to deter the competition and to increase his reputation.
  • Bugs publicized in the wild - A lot of other bugs and potential errors can be identified during the test, but these will not be enough to win the prize. After the competition, information about these bugs can suddenly become available "in the wild" without the software company being aware of them. So a lot of people can get their hands on a list of bugs and attempt attacks with them.
  • A lot of potentially malicious groups can become very familiar with the software - In order for the competition to have any effect, the software needs to be distributed to everyone and their mother, no questions asked. This means that a lot of teams will learn the inner workings and operating processes of the application. In a standard test-and-delivery process for such a corporate-scale application, the general public never learns how to use it. While this security by obscurity is not very effective, it is another deterrent against attempts at fraud or attack.

Conclusions
There are visible benefits and a lot of good PR for anyone organizing competition testing of its solution. However, in the long run, the majority of the valuable test results will not be returned to the organizer of the competition, so there is little benefit when the prize money is spent to close only one vulnerability.

So, unless you are up for a publicity stunt, when organizing a test, approach the process as a reward-based test under controlled conditions (your testing facilities, scenarios to be filled in and submitted for each bug found, etc.), and then reward the testers for each bug found.


Related posts

Information Risks when Branching Software Versions

3 rules to keep attention to detail in Software Development

8 Golden Rules of Change Management

Application security - too much function brings problems

Security risks and measures in software development

Security challenges in software development


Talkback and comments are most welcome

Corporate Security - Are the hackers winning?

Recently, I read a discussion claiming that corporate security is losing the war to hackers, and that quite soon corporate systems will crumble under the attacks. Here is an analysis of the security positions of both hackers and corporations.

Analysis

I. Corporations
A corporation addresses security in a systematic and planned way, always being careful to reduce costs and avoid spending resources on non-profitable investments. In addition, a large company's security position is hampered by:

  • Investment cycles - capital expenditures are usually planned ahead for 1- and 3-year periods, so the corporation needs to plan the funds for corporate security without really knowing what will happen
  • Strategic orientation - due to a specific corporate strategy, like heavy investment or cost cutting, security may be set aside and its development delayed
  • Vendor and product interoperability - anything set up within a corporate environment must be tested so that it does not break anything else and interoperates well. The testing cycle is usually a month or more long, with delays if significant problems are found
  • Human resources usage - again related to the use of resources, security personnel are also assigned other tasks, like project management or some form of legal or compliance work, which shifts their focus to other priorities and can cause them to miss an event
Let's review a typical process of implementing a security infrastructure upgrade in a large corporation:
  1. A necessity for security upgrade is identified - duration: 1 week
  2. The CISO/CIO/CTO and their teams investigate what the market offers - duration: 1 month
  3. The CISO/CIO/CTO prepare a formal proposal for the board and submit it for approval - duration: 2 weeks
  4. If the board approves, budget will have to be assigned to the project, and get approved by controlling. If there is no available budget, the project will be postponed until the next budget cycle (probably next fiscal year)
  5. The CTO and CFO will organize a price negotiation or competitive bidding - duration: 1 month
  6. Contract will be drafted and signed - duration: 1 week
  7. The technical teams, together with the supplier, will organize delivery, installation and testing - duration: 2 months
  8. The system will begin its life - approximately 5 months after the need was initially identified
II. Hackers

Hackers employ guerrilla-like tactics. They are not hampered by systematic approaches and plans, and treat each attack as a separate case, reusing what resources they can get their hands on and abuse, including impersonation, surveillance, even theft.

Let's review a typical process of a very systematic hacker attack on a corporation:
  1. The hacker has a good idea for an attack - duration: 1 day
  2. The hacker will research the target for attack feasibility and possible approach. During this time he will do information gathering, social engineering and collect a large volume of details about the target. During this period, he may also involve partner hackers into the attack team - duration: 1 month
  3. If the attack is feasible, the hacker will research the web and message boards for a possible solution to accomplish the attack - duration: 1 week
  4. If the solution is available, he will organize a way to download it - duration: 1 day
  5. If the solution is not available, the hacker will organize to make a program - duration: 1 week
  6. The solution will get tested on a low value target, or several of them - duration: 5 days
  7. The solution will then get used in the originally planned attack - approximately 2 months after the initial attack idea
Please note that this is a very systematic attack; the majority of attacks are attacks of opportunity, which occur within 1 week of the initial idea.

Discussion and conclusions
The reaction time advantage is obviously on the hacker's side. How come they haven't overrun the corporations by now?
There can be many different discussions about this, but they all boil down to:
  • resources and
  • ease of communication
Please review the following facts:
  1. For any guerrilla to be successful, it needs support. Support is given when there is a common cause. For most hackers, the cause is personal gain (financial or promotional), so the rest of the Internet population gives them little to no support.
  2. The corporations while slow in a normal process, have the option to "throw money at a problem", once it becomes too problematic. This will include: immediate purchase of special equipment and software, hiring of consultants, employment of better experts.
  3. The corporations have a large system base, and while many systems will suffer from some security deficiency, it takes time to hack into any one of them. So the hacker actually needs a significant amount of time to do a proper job on each target.
  4. The hackers need to maintain high level of secrecy and constantly watch their back. If an attack becomes too flagrant, hackers are viewed as common criminals, and are immediately under the scrutiny of police authorities. This limits their communication with peers to only the most trusted ones, and this circle of trust is rarely opened.
  5. A corporation with a problem can communicate the problem to consultants and partners and simply bind them with a Non-Disclosure Agreement, which is usually sufficient to maintain a corporate level of secrecy - breaching it would mean huge penalties and loss of reputation
Conclusion
While the hacker community is much more agile, it is simply hitting the small weak points of the corporate world and is careful not to hurt the corporations too hard, since hitting harder would bring the game to a whole new level, where they'd be facing undercover police agents and specific, very powerful detection systems.

So the overall balance of corporate security versus hackers will remain at a status quo, with attacks and security improvements happening at regularly alternating intervals.

Related Posts
Portrait of Hackers

Talkback and comments are most welcome

Business Continuity Analysis - Communication During Power Failure

As the world grows ever hungrier for power, resources are depleting, the climate is changing and large storms are becoming frequent, so power outages and massive grid problems all over the world will start to rise. While a massive power outage brings a lot of problems, companies will strive to continue some level of operation. And to achieve that, they need to communicate - both internally and externally. Massive power failures therefore dictate a special analysis of the telco backup resources. Here are the analysis and recommendations:

What happens to the telco infrastructure during a massive power failure?

  • Every advanced telco device not on UPS will stop functioning immediately, including: routers and modems, PBX, faxes, cordless phones, ISDN phones
  • The advanced telco devices supported by a UPS will fail within 90-180 minutes of the power failure, since the same UPS is also supporting PCs and other equipment (a back-of-the-envelope runtime estimate is sketched after this list)
  • The alarm systems, which usually have their own independent battery pack, will stop operating after approximately 24 hours
  • The GSM telephony base stations are mostly supported by UPS, with only the largest ones backed by generators. Therefore, most will fail within 100-200 minutes of the power failure.
  • The only remaining telco resources after approximately 4 hours of blackout will be
    • The advanced telco devices supported by a diesel generator
    • Public Switched Telephony Network (PSTN) lines - they are powered over the telephone line by the telco PBX, which in turn is powered by a generator
    • Islands of mobile telephony in the cells created by the Large Mobile Telephony base stations
    • Satellite communication devices, like VSAT or IRIDIUM phones - these are a very temporary solution, since they are strongly dependent on battery capacity
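
The failure windows quoted above are essentially battery capacity divided by load. A back-of-the-envelope Python sketch - the battery, load and efficiency figures are illustrative, not measurements:

    def runtime_minutes(battery_wh: float, load_w: float, efficiency: float = 0.9) -> float:
        """Rough UPS runtime estimate: usable stored energy divided by the attached load."""
        return battery_wh * efficiency / load_w * 60

    # Hypothetical office UPS: 1000 Wh of battery feeding a router, a PBX and a few PCs.
    print(f"Estimated runtime: {runtime_minutes(battery_wh=1000, load_w=400):.0f} minutes")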

Although diesel generators are not expensive, companies avoid them for all except the largest company locations for the following reasons:
  • installation brings a wealth of problems for companies, since they need approval from fire inspectors,
  • the company must adhere to safety and pollution regulations to install the generator
  • maintenance costs cannot be ignored, especially while the normal grid is working reliably and the generator sits idle
  • the diesel generators can become unreliable on very hot or very cold days
  • generators can become dysfunctional due to neglect or external influence - for instance, another company sealing off the exhaust pipe during remodeling

Recommended Measures
  • Place diesel generators at all locations where it is possible - don't go overboard, just use a small device with 6-8 hours of autonomy and an internal tank. After 10 hours of operation, you can schedule a controlled shutdown for a refill.
  • Have a dedicated "red phone" PSTN line at each location (or several of them), attached to a simple phone device with no external power requirements, which can be used during normal operations but becomes the primary means of communication during a longer blackout.
  • Include the threat in your Business Continuity Plan (BCP) and define proper steps to be taken in case of occurrence
  • Test the BCP with the power failure scenario.

Naturally, the measures are simple and well known, and naturally, few managers will accept the first two until the first power failure event.

But the Business Continuity Manager can do the following: create a BCP test scenario in which it is forbidden to communicate via any advanced telco device, and present the results of the test to management. The results will not be good, so be prepared to take the fire!

Related posts
Example Business Continuity Plan for Brick&Mortar Business
Business Continuity Plan for Blogs
Example Business Continuity Plan For Online Business

Talkback and comments are most welcome

Template - Software Acceptance Testing

Software testing is becoming a very mature area; it even has a formal name - Software Quality Assurance (SQA). SQA is part of the software manufacturing process, and nearly all software manufacturers have it integrated into their production process. Furthermore, it is advertised as a market strength of the manufacturer.

However, a lot of manufacturers' SQA processes cannot replicate the conditions that exist at the real user's site; some manufacturers may even be unaware of them. So if the buyer of the software unconditionally trusts the manufacturer's SQA process, he may find himself exposed to the risk that the software still does not perform according to expectations.

Here is an acceptance testing model that companies can apply to verify that the software they purchased performs to expectations.

Acceptance testing should not be done only when purchasing a piece of software. Instead, it should be integrated into change and release management as a functionality test. This test should prove that new versions deliver the promised changes, and that all currently used functions are maintained and bug-free.

To perform a proper acceptance test, the organization needs to:

  • Define which functions are tested - all new functions, and ALWAYS include the current business-critical functions for re-testing. New versions tend to creep bugs into places that worked perfectly, so re-testing critical functions will just keep you safe.
  • Define who performs the tests - The testers should be subject matter experts for the specific functions. These people have seen the shit, and know how it stinks. They will know best how to create proper test scenarios and preconditions.
  • Define the scenario of testing each function - Always define what scenarios will test the function. Include both normal operation activities as well as boundary conditions, where the testers will try to break the software. But since SQA should have weeded out the boundary conditions, focus more on normal operation.
  • Make a detailed log of the test - the log helps a lot of people: the software manufacturer to recreate the error, the testers in the next round to see what did and didn't work under which conditions, the auditors to review the process. So ALWAYS DO IT.
If the test is successful, the IT team needs to log the entire release process into production. Their log needs to record all activities performed on the system, the databases, and even in the application, if applicable.
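
A test log does not need heavy tooling - a flat CSV file with one row per test scenario is often enough. A minimal Python sketch of such a logger; the field names are illustrative and not taken from the template linked below:

    import csv
    import os
    from dataclasses import dataclass, asdict, fields
    from datetime import date

    @dataclass
    class TestLogEntry:
        test_date: str
        function: str
        scenario: str
        tester: str
        result: str      # e.g. "pass" or "fail"
        notes: str

    def append_entry(path: str, entry: TestLogEntry) -> None:
        """Append one acceptance-test result to a CSV log, adding a header for a new file."""
        new_file = not os.path.exists(path)
        with open(path, "a", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=[f.name for f in fields(TestLogEntry)])
            if new_file:
                writer.writeheader()
            writer.writerow(asdict(entry))

    append_entry("acceptance_test_log.csv", TestLogEntry(
        test_date=str(date.today()),
        function="Invoice printing",
        scenario="Print an invoice with 1000 line items",
        tester="J. Smith",
        result="fail",
        notes="Timeout after 5 minutes",
    ))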

While the software differs between organizations, the process is nearly the same.

Here is a template for the Software Acceptance Testing Log that an organization can easily adapt for local use.

Related posts
3 Rules to Avoid Problems due to Changes in Development

Information Risks when Branching Software Versions

3 rules to keep attention to detail in Software Development

8 Golden Rules of Change Management

Application security - too much function brings problems

Security risks and measures in software development

Security challenges in software development




Talkback and comments are most welcome

Stopping a Corporate IT Infrastructure in a Single Blow - are you safe?

A corporate computer infrastructure is a large system, and one that is fairly resilient and made to last. After all, there are backup links, redundant servers and replication technologies all over the place. And yet, there is a way to temporarily incapacitate an entire corporate Windows infrastructure with a properly delivered blow, simply because it relies fully on an often ignored service - DNS.

NOTE: This particular post has NOTHING to do with the recent DNS vulnerability craze. The vulnerability just adds another vector of attack, but an attack can be performed even without this vulnerability.

Back to the topic at hand, let's review how many services DEPEND on DNS:

  • E-mail service - relies on DNS to deliver e-mail destined for other domains - no DNS, no email sent to anywhere
  • Corporate applications - rely on DNS to resolve application and database server names - no DNS, no core apps
  • Active Directory - is entirely dependent on DNS to look up the Global Catalog, register SRV records and look up directory records. If DNS is down, even the domain controllers stop operating properly - no DNS, big problems with logon and management of Windows Active Directory
  • Network Access Control (NAC) - depends on DNS to discover its policy and update servers - no DNS, big problems in element authentication
Also, DNS has the following characteristics that make it even more vulnerable
  • DNS servers need to be open to all corporate users - All clients need to communicate to the DNS servers, to perform lookups for their services
  • The DNS servers' IP addresses must be known so they can be used - no hiding behind names; DNS servers are published to all clients as IP addresses
  • A DNS server works with minimal or no maintenance - when was the last time you checked your DNS servers? When was the last time you checked your client computers to see how DNS is assigned (DHCP, manual, hard-coded)?
Attack scenario
An attacker can insert a bot into corporate client computers by dropping baited media ("road apples"), sending malicious mails or hiding it in games. The bot can be set up to receive a remote command, or simply act as a logic bomb, to start a DoS attack on the corporate servers.

EFFECT: A proper attack will slow the DNS response down to a pace where 90% of all queries result in a timeout. As a bonus, the links will clog up with bogus traffic, preventing the corporate applications on the client computers from communicating at all.

A good time for this attack is the start of business hours, because even IPS systems have a learned trend that expects a lot of DNS traffic then, and will not react properly. The same goes for the IT teams.

Naturally, this attack is neither straightforward nor easy to do. It requires:
  • coordination and social engineering to collect information
  • access, or a trick, to install the bots on corporate clients
  • a properly programmed bot that bypasses antivirus detection
However, just because this is not easy does not mean it is impossible. This attack can be used to diminish a corporation's reputation by creating difficulties in operation, or as a diversion, so that the majority of system admins focus on recovering basic communication while a more sinister attack is in progress.

Controls and Countermeasures
While there is no single foolproof defence, the following controls will mitigate such an attack.
  • Have at least 1 cold backup DNS server - this can be a virtual machine, but offline, and with unpublished IP address. If all other DNS servers are under attack, this computer should be brought up and assigned as DNS to most critical clients, to achieve minimal operation.
  • Have dedicated DNS servers for the server infrastructure - these DNS servers should not be accessible by ordinary corporate clients, so even if a bot attacks the client-accessible DNS servers, the server infrastructure will continue to operate.
  • Set up DNS through DHCP for ALL client computers - in case of an attack, it is much easier to reconfigure a DHCP server and ask everybody to reboot.
  • Have an IPS system at the entry/egress point of traffic between clients and servers - the IPS can be of great assistance in analyzing an attack, and should be configured to send alerts when the traffic trend is breached (a minimal monitoring sketch follows this list).
  • Do not allow DNS traffic from the Internet - internal DNS servers are for internal use. If you have web and e-mail services, outsource minimal DNS hosting for those public addresses to an ISP. This way attackers from the Internet cannot harm your network - your exposure is reduced.
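
Even without an IPS, a trivial resolver probe can act as an early-warning canary for the timeout pattern described above. A minimal sketch using only the Python standard library - the host names are hypothetical internal records, and the thresholds are illustrative:

    import socket
    import time

    PROBE_NAMES = ["intranet.example.local", "mail.example.local"]   # hypothetical internal names

    def probe_dns() -> None:
        """Time a handful of lookups and warn when resolution slows down or fails.

        Lookups go through the operating system's resolver, so its own timeout
        settings determine how long a failing query takes to give up.
        """
        for name in PROBE_NAMES:
            start = time.monotonic()
            try:
                socket.gethostbyname(name)
                elapsed = time.monotonic() - start
                status = "SLOW" if elapsed > 0.5 else "ok"
                print(f"{name}: {elapsed * 1000:.0f} ms [{status}]")
            except socket.gaierror:
                elapsed = time.monotonic() - start
                print(f"{name}: lookup FAILED after {elapsed:.1f} s")

    probe_dns()   # run from a scheduler and alert on repeated failures or slow lookups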


Related posts
Check Your DNS Zone Transfer Status
DHCP Security - The most overlooked service on the network

Talkback and comments are most welcome

Business Continuity Plan for Brick & Mortar Businesses

While the Business Continuity Plan for Blogs covered business continuity activities for a very small online business, a BCP is much more important for standard, everyday businesses.

As a continuation of the series of Business Continuity Plan examples, we are happy to present a BCP for "brick and mortar" businesses. This example BCP is modeled after a mid-range accounting business, and it is easily adapted to any office-based business.

The Incidents included in this BCP are

  • Fire
  • Flood
  • Earthquake
  • Employee Illness - Epidemic
  • Strike blocking transport routes to site of business
Also, the BCP includes elements which are applicable to a multi-person organization, like a chain of command, locations of alternative resources and communication plans - all of these need to be in print, and all employees need to be aware of them, for proper BCP execution.

You can download the Example Business Continuity Plan for Brick and Mortar business HERE

Related posts
Business Continuity Plan for Blogs
Example Business Continuity Plan For Online Business

Talkback and comments are most welcome
