Software Response Evaluation Methodology

One of the most important characteristics of corporate software is response time (also known as speed). It is quite difficult to achieve, since corporate software solutions are multi-user and operate on very large data sets. Of course, everyone would like every action to return instant results, but that is impossible. Here is a methodology by which a company can set application response time targets and evaluate the software against them.

Because delays in response at some, if not all, points in a software product are inevitable, one should take a realistic stance. So, what is the required response time of an application? It is the amount of time that the user CAN wait, not the amount of time the user WANTS to wait!

The methodology for evaluating the acceptability of software response has six distinct phases:


I. Identify which functions will use the software. For the purpose of this example, assume the following functions use the evaluated software:

  1. Customer Relations Management
  2. Direct Sales
  3. Service Provisioning


II. Identify the activities in each function which will use this software. Enumerate the actions performed by the above functions, and add a row with the following information for each action:
  • Function - The name of the function that is the owner of the action
  • Action - The name of the performed action - as a business activity
  • #Times Per Day - Number of times this action is performed per work day by one employee
  • Avg. Perf. Time - Average duration of each performance of the action (in seconds)
  • Des. Perf. Time - Desired duration of each performance of the action (in seconds)
  • #Users - Number of persons performing this action during their work day
The filled table will show you what activities will happen in the software during a typical work day, and how many people will perform them. This is essential for a realistic evaluation.

The Avg. Perf. Time will give you the maximum expected response time, and the Des. Perf. Time will give you the optimal response time of the software for that action.

NOTE: You may want to reduce the measured numbers by 25% for the evaluation, since all software packages tend to slow down gradually with use, and this will give you breathing room. This reduction is a decision of the entire evaluation team and must be decided per project.


III. Identify the cutoff point for the test activities and the acceptable variation of results
- With the table properly filled, you have a set of realistic usage tests for response evaluation. In practice, the properly filled table will contain a huge number of activities, so you need to set a cutoff point - a set of activities which will simulate a real situation without going overboard.

Usually, you should discard actions that are infrequent (1-2 times per day) and have an Avg. Perf. Time of no more than 5 minutes, as well as actions that are deemed less important. This is a cruel part, and is best done together with the department managers.

Also, you should define what an acceptable result is. It is unrealistic to expect that the results will be a 100% match to your targets. An example of acceptable variation would be:
  • at most 20% of the performed actions during the evaluation are above the desired response time but below or equal to the average response time.
  • at most 5% of the performed actions during the evaluation are above the average response time


IV. Prepare real amounts of data - A common mistake of software developers is that they test their systems on a laboratory data set, which is usually far from the real situation, both in volume and in quality. Evaluation should be performed on a copy of production data, possibly anonymized for security reasons.


V. Call in the testers and make the test - With the evaluation activity set and the data set to evaluate on, hire a testing team to run the test. The best evaluation is the following:
  • Run an automated test, driven by programs simulating users, since they will measure the actual time of EVERY action down to the millisecond, and the results are easy to analyze. To avoid errors, run the test at least 5 times, discard the best and worst results and average the remaining runs
  • After this, run a test with real human users, and task them with timing and recording each of their actions. Then average the results of this test with the statistical results of the automated test


VI. Analyze results and make a decision - In a perfect world, the result would be pass or fail, and you would buy the software or move on. In reality, you will have great response times on some actions and horrible ones on others. And naturally, office politics and strategic interests will interfere with a cold decision. So here is a rule-of-thumb approach (a sketch of this logic follows the list):
  • If more than 25% of the performed actions during the test are above their respective average response time, return the software for complete reworking before re-evaluation
  • If less than 25% but more than 10% of the performed actions during the test are above their respective average response time, continue with further evaluation or preparation for implementation, but insist on a re-test before final purchase to reach the expected acceptable variation
  • If less than 10% but more than the acceptable variation of the performed actions during the test are above the target response time, continue the implementation, but insist on a re-test before go-live to confirm reaching the target variation
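
To make the rule of thumb concrete, here is a minimal Python sketch of the decision logic above. The measurement format and helper name are hypothetical - it assumes that for each performed action you have the measured time plus the desired and average targets from the phase II table.

  # Hypothetical sketch: classify measured actions against the phase II targets
  # and apply the rule-of-thumb decision above. Each measurement is a tuple of
  # (measured_seconds, desired_seconds, average_seconds).
  def evaluate(measurements, acceptable_variation=0.05):
      total = len(measurements)
      above_average = sum(1 for m, _, a in measurements if m > a)
      between = sum(1 for m, d, a in measurements if d < m <= a)
      pct_above_avg = above_average / total
      print(f"{pct_above_avg:.0%} above average target, "
            f"{between / total:.0%} between desired and average")
      if pct_above_avg > 0.25:
          return "Return for complete reworking before re-evaluation"
      if pct_above_avg > 0.10:
          return "Continue evaluation, but insist on a re-test before purchase"
      if pct_above_avg > acceptable_variation:
          return "Continue implementation, but re-test before go-live"
      return "Within acceptable variation - pass"

  # Example: three actions measured once each
  print(evaluate([(2.1, 2.0, 3.0), (5.5, 4.0, 5.0), (1.0, 2.0, 3.0)]))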

Naturally, this methodology can be expanded and amended with other elements. But even this version is quite capable of producing very realistic results, close to the everyday working conditions under which the software will function.


Talkback and comments are most welcome

Shortinfosec Content Quality Review

The second Google Page Rank update of this year just finished a few days ago. Shortinfosec just received a great bump in rating - went from PR 0 to PR 3. Not too bad for a blog that's been around for 6 months!

Although the Google PR increased, this was not Shortinfosec's primary target. I am not writing for Google - I am writing for the readers. As long as the readers receive value, Google can choose to think whatever it wants about Shortinfosec.

In the meantime, I would like to ask the readers the following questions


Please leave a comment with suggestions (and praise if deserved)

Obtaining a valid MAC address to bypass WiFi MAC Restriction

A reader in the comments on our post Example - Bypassing WiFi MAC Address Restriction made the following comment

"# Obtain a valid MAC address that is allowed on the network - And that right there is the hard bit. Perhaps an article on that before declaring how easy it is."
First, I would like to clarify several things
  • Every hacker attack requires some amount of specific knowledge, time, effort and resources. If this weren't the case, they wouldn't be called hackers - they would be called everyone!
  • It is not the goal of this site to provide step-by-step tutorials on actual hacker attack methods.
  • Bypassing the presented MAC address restriction protection is very easy, and it requires the least amount of knowledge, time and resources compared to bypassing other protection methods and attack types
Now, here is an explanation on obtaining the difficult element - a valid MAC address
  • If the WiFi network allows unlisted MAC addresses to associate and then uses some sort of egress filtering on the router or service selection gateway, just associate to the network and run Wireshark for 5 minutes to collect other MAC addresses on the network - Results in 5 minutes
  • If the WiFi network does not allow unlisted MAC addresses to associate, then you can
    • Download Backtrack and burn it to a LiveCD. Backtrack supports most modern WiFi laptop cards.
    • Boot your laptop from the Backtrack LiveCD. Run Kismet, which will put your wireless adapter into monitor mode. Use airodump to collect packets for analysis and find a valid MAC address - Results in around 3 hours
  • Create a small Perl program to generate a cycle of possibly valid MAC addresses and cycle them on your WiFi card using macshift (see the sketch below). This yields the best results when paired with a bit of social engineering - discovering the models of laptops connecting to the network, thus reducing the address space to search. Depending on skills and preparation, results in 4-24 hours
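
The post above suggests a small Perl program; here is the same idea as a minimal Python sketch. The OUI vendor prefixes and adapter name are illustrative assumptions (the adapter name matches the macshift example elsewhere on this blog), and the macshift call is left commented out since the exact argument format should be checked against your macshift version.

  # Hypothetical sketch: generate candidate MAC addresses sharing known vendor
  # prefixes (the first 3 bytes) and cycle them through macshift.
  import itertools
  import subprocess

  OUI_PREFIXES = ["001B63", "00216A"]        # example vendor prefixes only
  ADAPTER = "Wireless Network Connection"    # Windows adapter name

  def candidate_macs(prefix, limit=256):
      # Vary only the last byte; social engineering narrows the vendor prefixes.
      for last in range(limit):
          yield f"{prefix}0000{last:02X}"

  for prefix in OUI_PREFIXES:
      for mac in itertools.islice(candidate_macs(prefix), 8):  # demo: 8 per prefix
          print("Trying", mac)
          # subprocess.run(["macshift", mac, "-i", ADAPTER], check=False)
          # ...then attempt to associate and test connectivity before moving on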
Related posts
Example - Bypassing WiFi MAC Address Restriction
5 Rules to Home Wi-Fi Security

Talkback and comments are most welcome

Example - Bypassing WiFi MAC Address Restriction

Among security professionals, it is a well-known fact that using only MAC address restriction is useless as a protection mechanism for WiFi. But for the general public, this is still a popular method. This post aims to show how easy it is to actually hijack someone's MAC address and bypass this restriction.

Here is the process, as used on a Windows laptop

  1. Obtain a valid MAC address that is allowed on the network
  2. Download macshift, created by one of Internet's renaissance men - Nate True
  3. Copy macshift.exe to c:\Windows\System32\
  4. Find the windows name of your wireless connection, from the Network Connections, for example "Wireless Network Connection"
  5. Open a Command Prompt (start -> run -> cmd.exe)
  6. Obtain your adapter's MAC address, by typing ipconfig /all on the command prompt. The result will include the MAC address of all interfaces.
  7. Type macshift VALID_MAC_ADDRESS -i "Wireless Network Connection".
  8. Happy surfing
NOTE: Don't forget to change your MAC back to its original value when you are done!

The process, without step 1, takes a total of 5 minutes. Now, it can be argued that it is not easy to obtain a valid MAC address. Here are two scenarios:
  • If the WiFi network does not allow unlisted MAC addresses to associate, then you can:
    • Put your WiFi card in monitor mode and capture some traffic - from there it is easy to find the MAC addresses (see the sketch after this list)
    • Write a brute force program that will cycle the MAC address of your adapter and try to associate with the LAN. You can optimize the brute force by finding a laptop that can connect to the network and record the actual model. Then you can just cycle through half of the MAC address bytes
  • If the WiFi network allows for unlisted MAC addresses to associate and then uses some sort of egress filtering, on the router or service selection gateway, things are much easier - just run a sniffer for 5 minutes and collect all other MAC addresses on the network. Filter out the gateway MAC, and at a later time (usually in the dead of night) try them one by one.
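
For the monitor-mode capture scenario, here is a minimal sketch of pulling station MAC addresses out of a saved capture. It assumes the third-party scapy library and a capture file named capture.pcap - both are placeholders.

  # Hypothetical sketch: extract transmitter MAC addresses from an 802.11 capture.
  from scapy.all import rdpcap, Dot11  # pip install scapy

  macs = set()
  for pkt in rdpcap("capture.pcap"):   # placeholder capture file
      if pkt.haslayer(Dot11) and pkt.addr2:
          macs.add(pkt.addr2)          # addr2 is the transmitting station

  for mac in sorted(macs):
      print(mac)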
This example is presented just as an eye-opener for readers with less security experience. MAC address filtering may be used as a deterrent, but only together with WPA2 encryption and the minimal possible range of the WiFi access point signal.

Related posts
5 Rules to Home Wi-Fi Security

Talkback and comments are most welcome

Template - Corporate Information Security Policy

Implementing an Information Security Management System within a company is not a simple process. But like all things, it needs to begin somewhere, and the right place to begin is at the top.
All information security efforts should start with a strong top management commitment. This commitment is usually communicated via the Information Security Policy.

The Policy needs to be concise, easily readable by all employees and should clearly express the following statements:

  • Management is very serious about Information Security
  • All employees are responsible for and must enforce Information Security
  • Operational responsibility and guidelines for the Information Security Management will be delegated to the named persons and via the named documents
The Policy should be an internal document, available and emphasized on the intranet and, if possible, on the public web site.

And if you think that by now everyone should have this done, think again. A lot of fairly large organizations don't have this document created and communicated. The freshest example is the City of San Francisco, which apparently did not have a proper policy in place.

Information Security Short Takes has prepared a template document that you can download and use as a basis for your own Information Security Policy.

Download the Information Security Policy Template HERE


Related posts
Template to Regulate your Firewall Configurations

Talkback and comments are most welcome

Business Continuity Plan for Blogs

After the post on Example Business Continuity Plan For Online Business, there was a mail discussion with a reader about whether it is at all relevant to blogs. Here I would like to stress a fact: blog hosting providers have BCP plans, but to recover THEIR services, not all blogs. A lost blog may be collateral damage, since it is, after all, a free service.
Here is a Business Continuity Plan for blogs - it is actually the BCP of Shortinfosec, which I am using.

SHORTINFOSEC BUSINESS CONTINUITY PLAN BEGINS

Incidents

  1. Loss of broadband link communication
  2. Loss of Hosting (Blogspot down)
  3. Loss of Hosting (Blogspot lost content)

Loss of broadband link communication
Time to wait before using BCP plan - 24 hours


  • Find an alternative communication channel:
  • Use dial-up for connectivity - Time to achieve - Immediately
  • Use public hot spot at the Mall or Cafe - Time to achieve - 1 hour
  • Use GPRS from the iPhone
  • Publish the following message, both in the hotlink spot and as the first post:

We are experiencing difficulties in publication of new content. We
will continue with publication within the next 24 hours. In the meantime, please review our Archive

Total time of minimal function recovery - 1 hour after BCP activation

Total time of full recovery - 48 hours after BCP activation

Resources

  • Charged Laptop Battery
  • Charged iPhone
  • Modem within Laptop/PC
  • WiFi adapter for Laptop

Loss of Hosting (Blogspot down)

Time to wait before using BCP plan - 6 hours

  • Find alternative host and register - Time to achieve - 15 minutes
  • Wordpress http://wordpress.com/signup/
  • Typepad https://www.typepad.com/t/app/register
  • Choose a default template and Browse to see that it works - Time to achieve - 15 minutes
  • Login to feedburner and modify the feedburner path to new RSS feed - Time to achieve - 10 minutes
  • Publish post with content below - Time to achieve - 10 minutes
Title: Temporarily Moved
We are experiencing difficulties in hosting of http://www.shortinfosec.net/. We are working to resume normal operation. In the meantime, this is our temporary home.
Please send your comments, questions and reactions to shortinfosec _at_ gmail dot com
  • Set-up the temp blog to accept the address http://www.shortinfosec.net/ - Time to achieve - 15 minutes
  • Log-On to DNS Hosting and redirect http://www.shortinfosec.net/ to new blog location - Time to achieve - 15 minutes
  • If the Blogger problem persists for more than 24 hours, post new content to the new blog.
  • Wait for Blogger recovery, and if required restore the template and content so the original site is available.
  • If Blogger is not recovered within 48 hours, post old content as an archive on the new site (PDF or backdated posts)

Total time of minimal function recovery - 80 minutes after BCP activation

Total time of full recovery - 48-72 hours after BCP activation

Resources

  • Charged Laptop Battery
  • Functioning Internet access (refer to incident 1)
  • URL and account name/password of DNS Hosting Service - written down on paper, in laptop bag, also saved in laptop
  • Current Backup of Blogspot XML Template - Backup Weekly and send as attachment to two web-mail services
  • Current Backup of custom Widgets - Backup Weekly and send as attachment to two web-mail services
  • Current Backup of Template Images and Icons - Backup Monthly and send as attachment to two web-mail services
  • Current Backup of Blogspot Posts - Subscribe two web-mail accounts to the Feedburner feed - Immediate Backup
  • Current backup of Downloads section - Backup Monthly and send as attachment to two web-mail services

Loss of Hosting (Blogspot lost content)
Time to wait before using BCP plan - 1 hour

  • Login to blogspot or re-register if account is lost - Time to achieve - 15 minutes
  • Choose a default template and Browse to see that it works - Time to achieve - 15 minutes
  • Login to feedburner and modify the feedburner path to new RSS feed (if changed) - Time to achieve - 10 minutes
  • Publish post with content below - Time to achieve - 10 minutes

Title: Temporarily Moved
We are experiencing difficulties in hosting of http://www.shortinfosec.net/. We are working to resume normal operation. In the meantime, this is our temporary home.
Please send your comments, questions and reactions to shortinfosec _at_ gmail dot com

  • Set-up the temp blog to accept the address http://www.shortinfosec.net/ - Time to achieve - 15 minutes
  • Log-On to DNS Hosting and redirect http://www.shortinfosec.net/ to new blog location - Time to achieve - 15 minutes
  • If required restore template and content so the original site is available.

Total time of minimal function recovery - 80 minutes after BCP activation
Total time of full recovery - 24- 48 hours after BCP activation

Resources

  • Charged Laptop Battery
  • Functioning internet access (refer to incident 1)
  • URL and account name/password of DNS Hosting Service - written down on paper, in laptop bag, also saved in laptop
  • Current Backup of Blogspot XML Template - Backup Weekly and send as attachment to two web-mail services
  • Current Backup of custom Widgets - Backup Weekly and send as attachment to two web-mail services
  • Current Backup of Template Images and Icons - Backup Monthly and send as attachment to two web-mail services
  • Current Backup of Blogspot Posts - Subscribe two web-mail accounts to the Feedburner feed - Immediate Backup (a sketch of the mail-a-backup step follows the plan)
  • Current backup of Downloads section - Backup Monthly and send as attachment to two web-mail services

SHORTINFOSEC BUSINESS CONTINUITY PLAN ENDS
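
Most of the backup resources in the plan boil down to "mail a copy of a file to two web-mail accounts on a schedule". Here is a minimal Python sketch of that step; the SMTP host, addresses, file name and credentials are all placeholders, and real credentials should live somewhere safer than the script.

  # Hypothetical sketch: mail a backup file (e.g. the Blogspot XML template)
  # to two web-mail accounts, per the resource lists above.
  import smtplib
  from email.message import EmailMessage
  from pathlib import Path

  BACKUP_FILE = Path("blog-template-backup.xml")               # placeholder
  RECIPIENTS = ["backup1@example.com", "backup2@example.com"]  # two web-mail accounts
  SMTP_HOST, SMTP_PORT = "smtp.example.com", 587               # placeholder relay

  msg = EmailMessage()
  msg["Subject"] = f"Weekly blog backup: {BACKUP_FILE.name}"
  msg["From"] = "owner@example.com"
  msg["To"] = ", ".join(RECIPIENTS)
  msg.set_content("Scheduled backup attached.")
  msg.add_attachment(BACKUP_FILE.read_bytes(),
                     maintype="application", subtype="octet-stream",
                     filename=BACKUP_FILE.name)

  with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
      smtp.starttls()
      smtp.login("owner@example.com", "app-password")  # placeholder credentials
      smtp.send_message(msg)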

Related Posts

Example Business Continuity Plan For Online Business

Talkback and comments are most welcome

San Francisco WAN Lockout - Pointing Fingers at Everyone Responsible

The San Francisco WAN lockout incident is already written in the annals of IT history. I followed the development and the comments, and today I stumbled on a text, Who is really to blame for the San Fran network lockout?. It touches on important issues, but leaves the white gloves on. So let's remove the gloves and point some fingers:

What was the situation?

  1. Apparently, Mr. Childs was the only person with unrestricted administrative rights to manage the network, supposedly because of the incompetence of the other members of the team.
  2. The network is used to transport and manage all kinds of official documentation - including jail bookings and other law enforcement documents, payroll files, and e-mails
  3. He created an authentication scheme where only he had administrative access on the network.
  4. Apparently, the situation in points 1 to 3 was well known to the users and management, and was accepted as such.
  5. Mr. Childs clashed with the new Security Manager on the subject of authentication and control, which led to a poor formal review.
  6. The poor performance review and other undocumented power struggles led to the dismissal of Terry Childs and his subsequent arrest after he refused to relinquish the administrative passwords.

Who's responsible?

  • Terry Childs
    • He played god and isolated all other network engineers from the network - thus preventing them from any chance to learn how to manage the network.
    • He created and to date is enforcing the actual lockout that is the reason for all this ruckus.
  • Terry Childs' direct line manager and the one level above
    • They knew that Terry Childs had absolute control over the network and permitted that - If they were uninformed of the situation, they should be fired for gross incompetence.
    • They did not create conditions for knowledge distribution and reduction of dependency on a single person (Terry Childs could have fallen ill or gotten in a car accident - they still need another engineer).
    • They did not identify that there is a potential superiority problem with Terry Childs. This superiority problem usually manifests in insubordination when the control is taken away from a person.
    • Poor human resource management - if all other network admins were so incompetent that administrative authority couldn't be given to them, why did they hire them?
  • Top management
    • They delayed or avoided implementing a security policy which Terry Childs would have had to obey.
    • They did not create a no-single-point-of-failure strategy for their personnel.
  • Security Officer
    • He did not identify the risk that the employee might cause serious problems and did not propose alternative workarounds - for instance, hiring the equipment manufacturer's professional services to regain control and lock out Terry Childs.
  • Entire line management
    • Poor problem management - once it became clear that it would be difficult to regain control over the network, they fired Mr. Childs and called the cops. This only worsened the problem, since the cat is out of the bag, and the problem is still unresolved.

So, someone in the great City of San Francisco should now go around and actually look into the work of everyone named here, because the incident caused by Terry Childs is just the effect, not the root cause!

Talkback and comments are most welcome

ISS Increased Internet Threat Level

Yesterday Internet Security Systems (ISS) increased the Internet Threat Level to 2.

The reason for this increase is the publication of exploit code for the DNS cache poisoning vulnerability. Most DNS servers have this vulnerability unless patched with a recently issued vendor-specific patch.

Even with patched DNS servers, the threat remains under specific conditions.

Details of the Threat can be read Here

Example Business Continuity Plan For Online Business

Online businesses are 100% dependent on IT services, but a lot of them don't even consider the scenario of what will happen in case of a failure of the IT systems hosting their business/service.
Furthermore, a lot of online business owners simply assume that their hosting providers will recover their services - THIS IS WRONG - they will restore the information, but not necessarily the functionality!
Here is an analysis and a summary plan for business continuity of an online business:

First, a couple of definitions:

  • The goal of business continuity is to resume business operation in a reduced but controlled manner after a disaster which impacts operation - until full recovery is achieved
  • The goal of disaster recovery is to resume IT operations after a disaster which impacts IT operation - until full recovery is achieved

Requirement analysis
For large companies, the initial step of planning business continuity is the Business Impact Analysis (BIA), during which the company identifies which processes are critical to the company's survival and need to be restarted immediately, and which can be restored later.

A small online portal/service typically has the following processes:
  • Service Delivery - actual service running on web and database servers
  • Service Development - design, programming, upgrading, bug fixing of the service
  • Sales and Marketing - promotion, communication with affiliates
  • Accounting and back office operations - self explanatory
To simplify the BIA process, let's grade each process with a number indicating when it needs to be restarted. Here are the numbers and their meaning:
  • 1 - Process must never stop, immediate restart is needed
  • 2 - We can survive without this process for 1 day
  • 3 - We can survive without this process for 5 days
  • 4 - We can survive without this process for 15 days
So, for our processes, these are the grades:
  • Service Delivery - 1
  • Service Development - 3
  • Sales and Marketing - 2
  • Accounting and back office operations - 3
So, the most critical process is (surprise) Service Delivery. This process depends on the network, hosting, servers and databases. Our continuity plan will limit itself to this process and to only one incident that can impact it. A real Business Continuity Plan should take into account multiple incidents (power outage, DoS, loss of DNS, virus).
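
As a sketch of the grading step, the BIA results above can be kept in a small structure and sorted into a restart order. The grade-to-downtime mapping simply restates the numbers defined above.

  # Minimal sketch of the BIA grading above: grade 1 = immediate restart,
  # higher grades tolerate longer outages.
  MAX_DOWNTIME_DAYS = {1: 0, 2: 1, 3: 5, 4: 15}

  processes = {
      "Service Delivery": 1,
      "Service Development": 3,
      "Sales and Marketing": 2,
      "Accounting and back office operations": 3,
  }

  for name, grade in sorted(processes.items(), key=lambda item: item[1]):
      print(f"{name}: restart within {MAX_DOWNTIME_DAYS[grade]} day(s)")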

Example Business Continuity Plan

I. Incident type - Loss Of Application and Database Data due to hosting server errors
Steps to achieve continuity
  1. Post a temporary information and contact page on alternative free hosting - Time to achieve - 15 minutes
  2. Redirect DNS to temporary information page - Time to achieve - 10 minutes
  3. Investigate whether servers are available. If not available, consult the list of alternative hosting providers that can provide hosting for 1 to 3 months - Time to achieve - 1 hour
  4. Restore latest trusted backup of Database to operational DB server - Time to achieve -1 hour
  5. Restore latest trusted backup of Web Application to operational Web server - Time to achieve -30 minutes
  6. Perform functional test of updated infrastructure - Time to achieve - 1 hour
  7. Redirect DNS to the restored service - Time to achieve - 10 minutes
Total maximum time to recovery - 4 hours

Resources to achieve continuity
  • Temporary page prepared and available for publishing
  • Funds on credit card to purchase hosting for 1 month
  • List of alternative hosting providers which can support the application with contact information
  • Functional broadband link - alternatively, direct access to the hosting provider's premises and a vehicle for transport
  • Database Administrator/Developer available for activities
  • Web Application Administrator/Developer available for activities
  • Trusted and Stable Backup of Database
  • Trusted and Stable Backup of Web Application
Naturally, the plan must be tested to confirm that it works.

This example plan is very limited (one process, one incident), but it shows the general structure of a continuity plan. For an online business, in which every second of downtime counts, such a plan may be the difference between a minor incident and a loss of business.

Talkback and comments are most welcome

Competition - Computer Forensic Investigation

Shortinfosec is hosting a computer forensics competition.
In the competition, you will have to analyze a submitted disk image for incriminating evidence, as per the scenario below

Scenario
The investigators suspect that the employee was doing the following illegal activities:

  • Sniffing IP traffic on the network
  • Creating back doors to his PC
  • Stealing and copying a CD-ROM with confidential content
  • Downloading copyrighted music
  • Using a specific penetration tutorial document to perform most of his actions
The investigators found his PC turned off. They performed a DD copy of the surviving partition and sent it to you for investigation.

Competition materials
Download the evidence image here (compressed as hdb1-img.rar)

Rules of the competition

  1. Each competitor should submit a summary report (indicating only the number of discovered evidence items) as a comment to this post, to establish the time of solution.
  2. Each competitor should submit a detailed description of the process used to discover the evidence in an e-mail sent to shortinfosec _ at _ gmail dot com.
  3. All solutions must be submitted before midnight (CET) 20th of August 2008.
  4. The ultimate goal is to find one piece of incriminating evidence for each suspicion.
  5. It is fully acceptable to submit a result with less evidence found, if you feel that there is no other evidence to be found or you cannot discover it.
  6. The incriminating evidence may be disguised (renamed, compressed).
  7. Each competitor can withdraw and resubmit a better submission before the submission deadline
  8. You can use any type of investigative tools that you need, as long as you maintain the integrity of all evidence (proven by a SHA1 or MD5 hash). The utilised tools must be documented in the detailed submission.

Reward

  • Unfortunately, there are no financial rewards to this competition.
  • The first competitor to discover all evidence, or the competitor who discovers the most evidence before the deadline, will be the winner. His result will be presented as an analyzed solution on Shortinfosec.
  • Also, if the winner owns a blog or a site, it will receive a separate detailed review on Shortinfosec.
  • All other submitted results, regardless of the amount of evidence discovered, will be published in the results as honorable mentions, with links to their respective blogs/sites.

We hope to have a good and fruitful competition

Related posts
Tutorial - Computer Forensics Evidence Collection
Tutorial - Computer Forensics Process for Beginners

Talkback and comments are open for the competition

Portrait of Hackers

In order to defend properly against an attacker, one should understand the profile and motivation of the potential attackers that stand against you. Here is a brief profile of the types of persons that may be against you (you can use these profiles in internal training).

Hacker wannabes

  • Age - Younger teens, 13-17.
  • Gender - mostly male
  • Expertise level - After watching a lot of movies and knowing how to bypass the parental control on their browser, they like to think of themselves as hackers.
  • Motive - Achieving social popularity and peer recognition through their perceived skills
  • Posture towards their skills - They openly brag about their abilities and hope to achieve some social popularity through their skills.
  • Tools - In their actual hacking efforts they rely on howto's and "for dummies" books, and usually use prepackaged and downloaded attack tools to perform their "hacks".
  • Organization - acting mostly individually
  • Threat level - LOW - because they are employing standard prepackaged tools, even automatic defences (firewalls, IPS) will deflect such attacks. All it takes is an up-to-date protection system

Hackers

  • Age - Older teens and young people (16-25)
  • Gender - even distribution between both genders
  • Expertise level - strong expertise in programming, TCP/IP protocols and operating systems. Regularly updating their knowledge through advisories and exercising on real or demo targets. Some possess good social skills (social engineering).
  • Motive - Identifying vulnerabilities so they can be remedied. In certain cases, uncovering or making available to general public of corporate secrets for ethical reasons.
  • Posture towards their skills - Very proud of their knowledge, sharing it with a limited group. They know the risk they take, since their targets may not always appreciate their efforts.
  • Tools - Any number of off-the-shelf products always combined with custom written flexible code, viruses or worms.
  • Organization - They tend to be organized in loose groups similar to guerrilla squads, but while the group works for a common interest, it's still every man for himself. Petty squabbles often emerge in these groups, and there is a large membership rotation (some leaving, others joining).
  • Threat level - HIGH - with broad knowledge and customized attacks, they can defeat some automatic defences (firewalls, IPS). Additional levels of protection are needed, regular patching, employee education especially against social engineering, as well as good audit trail log and review.

Criminal Hackers (crackers)

  • Age - Varies from older teens to middle-age (17-45)
  • Gender - even distribution between both genders
  • Expertise level - strong expertise in programming, TCP/IP protocols and operating systems. Regularly updating their knowledge through advisories and exercising on real targets. Some possess good social skills (social engineering).
  • Motive - Financial gain through crime or politically motivated disruption.
  • Posture towards their skills - Very secretive of their knowledge, not sharing with anyone. They know the risk they take is large, and that should they be discovered their victims will go after them with a vengeance.
  • Tools - Any number of off-the-shelf products always combined with custom written flexible code, viruses or worms.
  • Organization - Can act individually or in an organized criminal group.
  • Threat level - VERY HIGH - since they have criminal motives as well as broad knowledge and customized attacks, they will use multiple criminal vectors in parallel or to support each other. They will most frequently act as customers to gain access and trust and collect information on weaknesses. To protect against them, a full collaboration of physical and IT security is needed. Also, employee education and segregation of duties assist in mitigating these attacks.

Disgruntled IT personnel

  • Age - Varies from young persons to middle-age (25-50)
  • Gender - mostly male
  • Expertise level - strong expertise in one area (programming, TCP/IP protocols or operating systems) and knowledge of the others. Insider knowledge of systems and pass codes. Keeping their knowledge of the current infrastructure up to date.
  • Motive - Financial gain through crime or dissatisfaction motivated disruption.
  • Posture towards their skills - Skills are generally well known within the company. No effort to conceal them, since it's in their job description.
  • Tools - All internally available tools for their everyday work, any number of off-the-shelf products always combined with custom written flexible code, viruses or worms.
  • Organization - Usually act individually. It is very unlikely that several IT persons are engaged in the same disruption.
  • Threat level - VERY HIGH - by default they have unlimited or very broad access, so the most difficult part of the attacker's job is already done for them. There is no foolproof technical solution for protection from these attackers. A good audit trail which is not administered by the sysadmins helps significantly. Also, HR and line managers must be trained to identify employee dissatisfaction and to react in time to prevent possible incidents.

Related posts

8 Tips for Securing from the Security experts

Control Delegated Responsibility

Talkback and comments are most welcome

Tutorial - Mail Header Analysis for Spoof Protection

In an age where a huge percentage of all attacks are done through e-mail, very few of us know how to analyze where an e-mail was actually sent from. This analysis must go beyond the sender address displayed in your e-mail client (which is easily spoofed). Here is a simple tutorial on analyzing Internet headers.

I. Where to find the e-mail headers?
A very frequent question. Let's review the common e-mail reading interfaces and where you can see the e-mail headers in them:

  • MS Outlook (all versions) - Point to a suspect email in your inbox and right-click. On the context menu, select Options. A new window will appear. In that window, the e-mail headers are displayed at the bottom, in the box titled Internet headers.
  • Outlook express (all versions) - Point to a suspect email in your inbox and right-click. On the context menu, select Properties. A new window will appear. In that window, click on the details tab. The e-mail headers are displayed in the box titled Internet headers for this message.
  • Gmail - When you open an e-mail message, at the top there is a link titled "Show original". Click on it and a new browser window will appear, with the e-mail header at the top.
  • Yahoo Mail - When you open an e-mail message, at the bottom there is a link titled "Full Headers". Click on it and the windows will re-render showing a very nice presentation of the e-mail header at the top.

II. How do e-mail headers work?

First, let's review how SMTP (Simple Mail Transfer Protocol) works to transfer your e-mails. Let's assume that you are sending an e-mail message to webmaster@shortinfosec.net.

  • When you click send, your local mail server will receive the e-mail message for further delivery.
  • The mail server will then break the recipient address into user (webmaster) and domain (shortinfosec.net)
  • The mail server needs to know which mail server can deliver an e-mail to webmaster@shortinfosec.net. For this, it will query the DNS server, asking for a Mail eXchanger (MX) record for the domain shortinfosec.net (you can reproduce this lookup yourself - see the sketch after this list).
  • The MX record is actually a DNS name of another mail server which is registered as authoritative for a specific domain - i.e. knows what to do with e-mails for that domain
  • The mail server contacts the MX server for the shortinfosec.net domain and delivers the e-mail message. Then the MX server will follow internal rules on how to deliver the message to webmaster@shortinfosec.net
  • There are specific mail servers on the Internet called relay servers, which don't actually hold real mailboxes. They are usually hosted by ISP's and provide availability to receive e-mails for many domains, which are then internally delivered to the real mail servers residing on slow links or hidden behind corporate firewalls.
  • An e-mail message may traverse multiple hops on the Internet before being delivered to the recipient.
  • Each mail server that processes an e-mail message during its transit will add a line to the header of the e-mail message. A legitimate mail server will NEVER rewrite or alter an existing header line. This was originally designed for troubleshooting, but is very useful in spotting scams and fake e-mails
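
Here is a minimal sketch of that MX lookup, using the third-party dnspython package (the Python standard library has no DNS resolver). The domain is the one from the example above.

  # Hypothetical sketch: ask DNS which mail servers accept mail for a domain.
  import dns.resolver  # pip install dnspython

  answers = dns.resolver.resolve("shortinfosec.net", "MX")
  for record in sorted(answers, key=lambda r: r.preference):
      # A lower preference value means sending servers try that host first.
      print(record.preference, record.exchange)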

III. How do I analyze the e-mail headers?

Let's review a real-life example. The following e-mail headers are from an e-mail that supposedly arrived from Chase Bank, and is a clear example of a phishing attack.

NOTE: The real recipient, domain and its servers are anonymized.

ANALYSIS:

  • The message claims that it was sent from smrfs@chaseonline.chase.com. This information can very easily be forged, so NEVER trust it.
  • The useful information is in the "Received:" lines. Each of these lines represents a hop between two mail servers on the path from the sender to the recipient. These can also be forged, but there is a catch: a malicious mail server can forge the headers added up to that point, but at some point it has to hand the mail to legitimate mail servers. The legitimate mail servers WILL RECORD the IP address of the sending mail server, and this information will ALWAYS BE TRUE.
  • So, the malicious sender has no control over the later Received lines of the header.
  • The "Received:" lines are stacked on top of each other, so the first hop will be the lowest, and the last hop will be the first in the header. Therefore, to properly follow the path, read the lines bottom-up.
  • So, reading our e-mail header, this e-mail was sent from an ADSL IP address registered to an ISP in Warsaw, Poland, and then had 2 more hops through the protection systems of the delivering ISP.

IMPORTANT - You can easily check the registered owner of any address using SamSpade.org

  • Suddenly, it's obvious that this message has a slim-to-none chance of being sent by Chase Bank. There is absolutely no reason for them to send it via an ADSL address in Poland when they have huge corporate servers.
  • There are two more elements that can be useful for analysis, although they can be forged:
    • X-USER_IP - the apparent IP address of the sending client computer
    • User-Agent - the apparent mail client program used to send the e-mail
  • In our example, the X-USER_IP points to 12.177.160.117 - an AT&T WorldNet Services address - and the User-Agent claims to be a Tumbleweed Mail Gate server, both of which are highly suspicious, so we discard them. Extracting and re-ordering the Received lines can also be scripted, as sketched below.
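
As an illustration of the bottom-up reading, here is a minimal sketch that pulls the Received lines out of a raw message and prints them oldest-first. It assumes the raw headers are saved in a file named headers.txt (a placeholder) and uses only the Python standard library.

  # Hypothetical sketch: list the Received hops of a message oldest-first,
  # matching the bottom-up reading described above.
  from email import message_from_string

  raw = open("headers.txt", encoding="utf-8", errors="replace").read()
  msg = message_from_string(raw)

  received = msg.get_all("Received") or []  # header order: newest hop first
  for hop_number, line in enumerate(reversed(received), start=1):
      print(f"Hop {hop_number}: {' '.join(line.split())}")  # unfold wrapped lines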

Conclusions

When in doubt about the authenticity of an e-mail message DON'T follow instructions within it and DON'T click on the attachments inside it. Instead:

  1. Open the e-mail headers and read where the message came from. Usually, it's very easy to identify a fake message just from the path it took on the Internet.
  2. If you can't identify the problem, just extract the headers and send them to your IT and Security Officer for analysis.

Related posts

Tutorial - Measures for minimizing Spear Phishing Attacks

Example - SMTP message spoofing


Talkback and comments are most welcome

Tutorial - Computer Forensics Evidence Collection

Following up on the Tutorial - Computer Forensics Process for Beginners , here is a step-by-step tutorial on how to process a suspect computer to obtain dumps of RAM memory and Disk Drive using Helix Forensic CD.

Our suspect computer is a Windows XP Virtual Machine.
Our Example Forensic Toolkit
  • Helix forensic CD - your basic tool for the investigation
  • Evidence USB - 16 GB Capacity - for removing smaller evidence files from the evidence computer
  • Analysis computer - a Windows laptop
  • VDK driver for the analysis computer (if using Windows) - this driver will enable you to mount a DD image created during the evidence collection
  • Sophos Antivirus and A-Squared Free Antispyware detector software for the analysis computer
I. Running state evidence collection
  1. Insert the Helix CD in the suspect computer's CD/DVD drive. Helix has an autorun, so it should start immediately, but be careful: if you are logged on as anything other than an administrator, you won't be able to make a dump of the full physical memory. So close the autorun, choose the Run as option to start the Helix software, and provide the Administrator credentials.

  2. WARNING - DO NOT log off the session in order to log on as an Administrator! Ending a session will inevitably change and contaminate the content of RAM, since a lot of processes are closed upon logoff.



  3. When Helix starts, there will be a warning screen stating that Helix won't be able to protect the suspect OS environment from changing, since it's running within that environment. But since there is no other way to take a snapshot of the RAM, just choose Accept.



  4. You will see the startup screen of the Helix tool. The first icon is just a preview of system info, so it's not too useful. Go ahead to the second option - acquisition.
  5. Acquisition will prompt you for the source to be dumped - choose Physical Memory - and for the destination - direct the output to the evidence USB drive. It will ask for a second confirmation and will start the dump.



  6. After the memory dump is finished, choose Incident Response (3rd icon on the left menu) and click on the small arrow to go to the second screen. Run WinAudit.


  7. Click on the only link and let it perform an inventory of the system. Save the result as a PDF on your evidence USB.






After WinAudit finishes, close it, and close the Helix main window. It will ask whether you would like to record all activities in a PDF file. Confirm that you wish to, and save the PDF on your evidence USB.
The above process will create an MD5 hash of the memory dump on the evidence USB. Open this file and take note of the MD5 hash.

II. Disk drive evidence collection
  1. Turn off the computer ungracefully - pull the plug. This will prevent any possible shutdown scripts from running and possibly erasing data on the computer.
  2. Boot it up again, and from the BIOS select to boot from CD-ROM. In a real corporate investigation, you may need the assistance of IT to provide passwords, since most corporate PCs are set up with a BIOS password and prevented from booting from CD to deter information theft.

  3. Boot the Helix Linux OS

  4. When booted, select Adepto from the Forensics Menu



  5. Similarly to the memory dump above, select the drive you wish to dump, and select your evidence USB as the destination. For the hash, you can choose among several algorithms; the example uses SHA1. After the dump is finished, choose the last tab (Report) and save the dump report as a PDF to the evidence USB.

  6. Copy all files to your analysis computer, and verify the hashes of the memory and disk dumps again using md5sum and sha1sum, whichever you used initially (a small scripted check is sketched below).


  7. Using VDK, mount a copy of the disk image for investigation. The mount command is: vdk open path_to_dump_file\dump_filename.dd /L:free_drive_letter
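
For the verification in step 6, here is a minimal Python sketch that recomputes a dump's hashes in chunks (dump files are far too large to read in one piece). The file name is a placeholder; compare the output against the values noted during collection.

  # Hypothetical sketch: recompute MD5 and SHA1 of an evidence dump and compare
  # them with the values recorded during collection.
  import hashlib

  def file_hashes(path, chunk_size=1024 * 1024):
      md5, sha1 = hashlib.md5(), hashlib.sha1()
      with open(path, "rb") as f:
          while chunk := f.read(chunk_size):  # stream the file; dumps are huge
              md5.update(chunk)
              sha1.update(chunk)
      return md5.hexdigest(), sha1.hexdigest()

  md5_digest, sha1_digest = file_hashes("memdump.dd")  # placeholder file name
  print("MD5: ", md5_digest)
  print("SHA1:", sha1_digest)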

You can download and review the forensic log documents created in this tutorial HERE (5.19 MB ZIP file)

Helix_Evidence_Collection_Sample_Logs.zip
Verification sums:

  • SHA1SUM c7d189a78a715fd96127677d39d5ace1d5854ea5
  • MD5SUM 9b61fad0cf4418175cb7e387c6962c49

This concludes the easy part of computer forensics - evidence collection. Shortinfosec will follow up with exercises on the analysis part.

Related posts

Tutorial - Computer Forensics Process for Beginners

Talkback and comments are most welcome

Tutorial - Computer Forensics Process for Beginners

Computer forensics is currently a very popular term; a lot of conferences are organized and books written on the subject. This, together with the popularity of the CSI series, gives it an aura of very special, even magical steps that forensics teams use. In reality, the computer forensics job is a standard process, and every one of us performs parts of it when debugging our own computers. So, here is a simple tutorial on what is involved in computer forensics:

Computer forensics process


Below is a diagram of the forensics process. It is a generic process, but applies in computer forensics.



In order to properly apply the forensic process to computers, let's expand the generic diagram into the following:

As you can see, in computer forensics a lot of evidence can be collected while the computer is running. That is a one-shot chance - you'll never have it again once you turn off the computer.

Your Forensic Toolkit

Every trade needs its tools. For the beginner investigator, here is my recommended toolkit:

  1. Helix forensic CD - your basic tool for the investigation
  2. Digital camera - capturing physical state of the suspect computer
  3. Evidence USB - 4 GB Capacity - for removing smaller evidence files from the evidence computer
  4. Evidence USB hard drive (500 GB will be enough for most purposes) - for making an evidence copy of the disk drive
  5. Analysis computer - probably a laptop, but sparkling clean - no viruses, Trojans, cookies or similar wildlife on it, since they can corrupt the evidence. Even if the evidence isn't corrupted, it may be considered contaminated and become inadmissible in a formal case.
  6. VDK driver, for the analysis computer (if using windows) - this driver will enable you to mount a DD image created during the evidence collection
  7. Antivirus/Antispyware/Rootkit detector software for the analysis computer
Steps of the forensic process
1. Evidence collection

1.1. While the suspect computer is running

  • Make an image of the RAM, and store it on the evidence hard drive/USB. Make an MD5/SHA1 hash of the image, save it, and write it down in a notebook.
  • Make an inventory of all processes, network connections, installed software, hardware - everything you can about the computer. Save this in a file on the evidence hard drive/USB. Make an MD5/SHA1 hash of the file, save it, and write it down in a notebook.

1.2. When the suspect computer is off

  • Make an image of the hard disk drive and store it on the evidence hard drive/USB. Make an MD5/SHA1 hash of the image, save it, and write it down in a notebook.
  • Photograph the suspect computer from all sides. Save the pictures on the evidence hard drive/USB. Make MD5/SHA1 hashes of the photographs, save them, and write them down in a notebook.
  • If any immediate physical tampering is apparent, photograph it specifically, and possibly expand the investigation with a forensic expert who will look for evidence regarding the tampering method (fingerprints, tool markings).
  • Open the computer and photograph the interior under good lighting. Save the pictures on the evidence hard drive/USB. Make MD5/SHA1 hashes of the photographs, save them, and write them down in a notebook.

2. Evidence analysis

  • Load copies of the evidence images onto your analysis computer. Confirm that the copies have the same MD5/SHA1 hashes as the originals you noted.
  • Search the raw images of the RAM and the disk drive for strings, and save the results for future reference (a minimal sketch of such a search follows this section)

All following steps need to be used in the context of the investigation, so there is no specific exact step to use

  • Review the strings dump for specific keywords
  • If there are specific keywords related to your investigation ('payroll', 'salary', 'password', someone's user name or e-mail address), search for those strings in the raw images. Save the results for future reference.
  • Mount the disk drive image as a read-only drive. Scan the drive for viruses, rootkits and spyware. Save the results as screenshot or log file
  • Analyze the event log of the suspect computer for any anomalies. Log anomalies with times of occurrence
  • Analyze the running processes log of the suspect computer for any suspicious processes. If found, refer back to the memory dump to investigate the process (memory content, using a hex editor and string search)
  • Find pics/movies/docs/web-mail and log positions for review. Alternatively, review them immediately for specific issues
  • If applicable, use steganography detection software to detect hidden data in images and music.
  • Analyze browser cookies for connection to specific sites or Internet activity
  • Analyze e-mail records for connections to specific sites or Internet activity
  • Investigate files in slack space (deleted from the File Allocation Tables but not physically from the disk)
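
Here is a minimal sketch of the raw keyword search mentioned above. It reads the image in overlapping chunks so a keyword split across a chunk boundary is not missed; the image name and keywords are placeholders.

  # Hypothetical sketch: scan a raw dd image for keywords, recording byte offsets.
  KEYWORDS = [b"payroll", b"salary", b"password"]  # example terms from the text
  CHUNK = 4 * 1024 * 1024
  OVERLAP = max(len(k) for k in KEYWORDS) - 1      # catch matches spanning chunks

  def scan(image_path):
      hits = set()   # a set deduplicates matches found twice in the overlap
      offset = 0
      tail = b""
      with open(image_path, "rb") as img:
          while chunk := img.read(CHUNK):
              data = tail + chunk
              for word in KEYWORDS:
                  start = 0
                  while (pos := data.find(word, start)) != -1:
                      hits.add((word.decode(), offset - len(tail) + pos))
                      start = pos + 1
              tail = data[-OVERLAP:]
              offset += len(chunk)
      return sorted(hits, key=lambda h: h[1])

  for word, pos in scan("hdb1.img"):  # placeholder image name
      print(f"{word!r} at byte offset {pos}")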

3. All incriminating evidence (context dependent) is to be logged with original timestamps and with appropriate presentation (screenshots, text dumps, audio recordings)

This is by no means a definitive and final tutorial. Shortinfosec will follow up with exercises and a demo dump for the readers to dissect in the comfort of their own home.

Talkback and comments are most welcome

3 Rules to Prevent Backup Headaches

Any modern IT infrastructure needs and (usually) has a solution for backup of information. But due to the constant drive to reduce expenditures, very undesirable situations can occur, such as not being able to read the backed-up data.

Example scenario:
A telco company has two data centers - one operational and one warm backup datacenter which is kept in sync via replication services. Due to the rise in capacity of stored data, the old tape library in the primary datacenter has to be replaced with a much larger tape library to accommodate proper backup.
The old tape library is still operational, and is moved to the warm backup datacenter to provide backup for the servers there, should they become operational.
After 6 months, a major power failure occurs and the backup datacenter needs to be brought online. During the process, it is concluded that one of the ERP databases became corrupted during replication and cannot be recovered. Since the tape backup from the primary location is kept offsite in a bank vault, the tape backups of the ERP database are taken from the bank and brought to the backup datacenter. Upon attempting to restore the data from the backup tapes, it is concluded that the tapes are unreadable, and the database cannot be restored immediately.
The database is restored from an old backup and then rebuilt by manual entry over the course of 5 days.

Analysis:

  1. The vision of the backup systems' operation for the primary and backup locations was that the servers at each location would back up to and restore only from the local tape library.
  2. Nobody bothered to check whether the old and new tape drives and tapes were compatible with each other, and whether tapes from the primary location could be read at the backup location and vice versa.
  3. This led to the problem in which the last resort - the tape backup - although properly archived and protected, was unusable.

Recommendations:
To avoid such and similar problems, follow these rules
  1. Make sure that you have full compatibility of all tape drives used within the organization - such compatibility will ensure that you can easily use any drive for any tape, even move one drive to a specific location if the need arises.
  2. Make sure that your tape drives are functional - perform regular 'exercises' of backup and restore on ALL drives within the organization. If you don't, by Murphy's law, the only remaining drive you have during an incident will be clogged with dust or simply dead.
  3. Make sure that your tapes are functional - perform regular 'restore exercises' for all tapes, and keep track of tape lifetime. The last thing you want is to have a possibly failing tape during a disaster recovery.

Talkback and comments are most welcome

5 SLA Nonsense Examples - Always Read the Fine Print

I've had the opportunity to review several poor Service Level Agreement (SLA) contracts, which include clauses shielding the provider as if they were an endangered species. These clauses are usually masked under "general clauses" or fancy legal lingo so they may go unnoticed.

Here are several examples of texts that a customer should watch out for in a Service Level Agreement:

1. The data protection trick

  • Sample clause: The provider will protect and not reveal any received or collected information about the buyer, unless it's required by legal authorities during a formal investigation or in case of protection of provider's interests
  • Analysis: Although this particular clause may vary from country to country (legal system differences), there is NO LOGICAL ARGUMENT for anyone to reveal your information for the protection of their own interests.
2. The no responsibility trick

  • Sample clause: The customer will hold harmless and indemnify the provider from all errors, damage or data loss, loss of business, delays in processing or any other problems resulting from usage or inability to use this service. The provider is not responsible for any damage to hardware or systems during the installation or maintenance of the service
  • Analysis: While a relatively standard clause, always have your legal team AND your technical team review and dissect it. In the example, the bold sentence actually makes the provider not responsible for any screw-ups during installation, even if their technician plugged a 110V line into a 300V outlet, or used a drill to tighten a screw on the serial port.
3. The automatic consent trick

  • Sample clause: The provider reserves the right to modify the conditions of service, and the modifications will be considered agreed to in case of service contract renewal.
  • Analysis: An SLA can be written to refer to certain general conditions related to the service. A provider can modify these formal conditions without proper communication to the customer. Since most contract renewals are automatic, this can suddenly put the customer in a very bad position even if the initial SLA contract was good. Always insist that all agreed changes to service must be signed off in a dedicated document.
4. The service quality trick

  • Sample Clause: Our service has a service quality of XX% (delay, latency, bandwidth)... In case of unforeseen circumstances, this quality may be reduced.
  • Analysis: Nobody signs and pays for an SLA to guarantee services under ideal circumstances. The term 'unforeseen circumstances' is simply a get-out-of-jail-free card. For instance, even rain can be an unforeseen circumstance for a poorly protected wiring cabinet, but it's not something that the customer should worry about. If special circumstances need to be addressed, they need to be properly itemized, without room for different interpretations.
5. The business hours trick

  • Sample Clause: All service activities are performed between 8AM and 6PM. If the customer requires intervention outside of business hours, such intervention will be charged according to the regular pricing policy.
  • Analysis: This clause may have a place in a standard contract. But when an SLA contract is signed, its levels are above the standard contract, and are priced accordingly. So, if one is paying for a level defined in the SLA, the price MUST COVER ALL POSSIBLE SCENARIOS.

Related posts
9 Things to watch out for in an SLA
The SLA Lesson: software bug blues


Talkback and comments are most welcome

Tutorial - Using Ratproxy for Web Site Vulnerability Analysis

After Shortinfosec compiled the Ratproxy tool for Windows, we got e-mails complaining that it is still unclear how to use the tool. Therefore, Shortinfosec is following up with a tutorial on using Ratproxy.
NOTE: Shortinfosec will present a demo analysis and report, but will not delve into actual compromise of the concluded vulnerabilities

A hacker attacking a web site will analyze the entire structure of the site, and use his experience and external tools to identify the points where he can compromise it. Ratproxy emulates this operation by functioning as a web proxy for the user's browsing. This way, ratproxy is able to intercept and analyze the entire communication and content of the analyzed site.

The difference between a hacker and ratproxy is that ratproxy will identify potential vulnerabilities but will not exploit them - it will just report them.

Ratproxy can be run with or without potentially disruptive tests. The difference is in the X (disruptive) or x (non-disruptive) switch. Here is a command activating ratproxy with disruptive functionality:
ratproxy -v ratproxy -w report.log -d domain.com -leXtifscg


After that, the folder from which ratproxy was run will contain a file called report.log. To make it human-readable, you should run it through a parser, downloadable from
http://code.google.com/p/ratproxy/source/browse/trunk/ratproxy-report.sh?r=9

You should run it from a cygwin shell. Make sure that it's a UNIX-formatted file (LF line endings), otherwise the shell will report errors.

The parser should be run with the following command
$ ~/ratproxy-report.sh report.log > report.html

When the report.log file is parsed, it will create an HTML file. Below is a screenshot of the report


The report organizes the findings by the type of possible error encountered, and then by the criticality of each specific issue identified.
Shortinfosec has created a sample report from scanning a localhost Apache 2.0 server with a CMS Made Simple site. You can download the sample report here.

Obviously, there are other products which perform the same function, like WebScarab, Paros, Burp, and ProxMon - so what is the benefit of ratproxy?
According to the ratproxy documentation,

it is designed specifically to deliver concise reports that focus on prioritized issues, and to do this in a hands-off, repeatable manner. It features sophisticated content-sniffing functionality capable of distinguishing between stylesheets and Javascript code snippets, supports SSL man-in-the-middle and on-the-fly Flash ActionScript decompilation, and even offers an option to confirm high-likelihood flaw candidates with a very lightweight built-in active testing module.

Related posts
Ratproxy - Google Web Security Assessment Tool
Google's Ratproxy Web Security Tool for Windows

Talkback and comments are most welcome

Google's Ratproxy Web Security Tool for Windows

In our previous post, we announced the new security tool - Google's ratproxy. It functions as a proxy, much like Paros.
Shortinfosec has compiled ratproxy v1.51 on windows.

You can download compiled ratproxy-1.51.exe for Windows here

Verification sums:
ratproxy-1.51.exe SHA1SUM 42dbe6ffa00a3987f32b19a7c6e9ca84240db157
ratproxy-1.51.exe MD5SUM c41acfd5ab7874dfef3970ac52eb2a9b

In order to run it, you need to download and install the cygwin runtime, since ratproxy depends on several cygwin libraries. Do not forget to update your PATH variable to include c:\cygwin\bin.

Quickstart
To run it, use the following steps

  1. create a report directory (report_outdir)
  2. type ratproxy -v report_outdir -w report_filename -lfscm
  3. reconfigure your browser to use proxy on address localhost:8080
  4. Start browsing, ratproxy will create reports.
Report parsing
Copy the report generator from the location below, and create a file from the text. It's a bash script, so you should run it from a cygwin shell. Make sure that it's a UNIX-formatted file (LF line endings), otherwise the shell will report errors.
http://code.google.com/p/ratproxy/source/browse/trunk/ratproxy-report.sh?r=9

It creates an HTML report from the raw report generated by ratproxy.

Related posts
Ratproxy - Google Web Security Assessment Tool

Talkback and comments are most welcome
