Hunting for hackers - Google fraud style

A lot of people on the internet are aware of the Google AdSense fraud detection algorithms.
But few people realize that the algorithms which help Google track down fraudsters are also very useful for hunting the careful and thorough hacker.

Here is the logic that applies both to hunting hackers and hunting fraudsters:

  1. The attacker will make repeated attempts
  2. The attacker has finite and limited resources at his disposal
  3. The attacker is geographically relatively fixed
  4. The attacker will try to stay connected for as little time as possible
Naturally, both you and Google need a good database of events to analyze. Google has its web server logs; you will have to build a consolidated log of firewall, server and router events to be on par. Of course, the first question is - what data to collect?

Here is what Google can analyze when hunting adsense fraudsters:
  • Referrer URL of the visitor who clicked on an ad
  • IP address of the visitor who clicked on an ad
  • Autonomous System (provider) of the visitor who clicked on an ad
  • Browser and OS version of the visitor who clicked on an ad
  • Time of the visit
  • Length of the visit
  • URL of the visited site
  • Site Content Variation - to hunt for content copy sites
  • Available Cookies on a Google site

Taking the same approach, here is what you can analyze when hunting for hackers:
  • IP address of connecting machine
  • Autonomous System (provider) of connecting machine
  • Destination service/port
  • Time of session (for TCP connections)
  • Length of session (for TCP connections)

With this information, you can write a program to analyze the following:
  • Match nonstandard port attempts from same Autonomous System within given interval (say 1 week)
  • Within the Autonomous System, find all connections from the same IP address or same pool (C class)
  • Look for very short sessions from same AS and/or pools
  • Look for variations of destination ports and protocols from same C class pool
Any match of more than 3 events within a week should alert a human security officer to conduct a more detailed analysis.
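Here is a minimal Python sketch of such a correlation script. It assumes the consolidated log has already been parsed into records with hypothetical field names (ts, src_ip, src_as, dst_port, duration); adapt the field names, the port list and the thresholds to your own environment.

```python
from collections import defaultdict
from datetime import timedelta
from ipaddress import ip_network

WINDOW = timedelta(days=7)
STANDARD_PORTS = {25, 53, 80, 110, 143, 443}   # the services you expose on purpose
SHORT_SESSION = 5                              # seconds - tune to your traffic

def class_c(ip):
    """Collapse a source address (string) to its /24 ('C class') pool."""
    return ip_network(f"{ip}/24", strict=False)

def suspicious(event):
    """Nonstandard destination port or a very short session."""
    return (event["dst_port"] not in STANDARD_PORTS
            or event["duration"] < SHORT_SESSION)

def correlate(events):
    """Yield (AS, /24) pools with more than 3 suspicious events within one week."""
    buckets = defaultdict(list)
    for ev in sorted(events, key=lambda e: e["ts"]):
        if not suspicious(ev):
            continue
        key = (ev["src_as"], class_c(ev["src_ip"]))
        recent = [e for e in buckets[key] if ev["ts"] - e["ts"] <= WINDOW]
        recent.append(ev)
        buckets[key] = recent
        if len(recent) > 3:
            # Hand off to a human security officer for detailed analysis
            yield key, sorted({e["dst_port"] for e in recent}), recent
```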

Talkback and comments are most welcome

Related posts
Portrait of Hackers

Checking web site security - the quick approach

One of the most frequent questions delivered to a security officer is: is this web site secure?
While a proper answer can be obtained only through a full blown penetration test, there is a quick approach which will yield a very good "feel" for the site's security.

The approach
In order to obtain relevant results by this quick approach, you need to assess the web site from the following aspects:

  1. Overall server weaknesses
  2. Web server weaknesses
  3. Web site/application weaknesses
The tools
To achieve the quick solution, one must use the proper tools of the trade. Luckily, the tools of the quick approach are free and very efficient:
  1. Nessus - For server weaknesses assessment
  2. Paros - For web server weaknesses assessment
  3. Ratproxy - For web site/application weaknesses assessment
The process
If the target is not owned by your company, be sure to obtain consent from the owners for the scanning. Use the tools in the sequence in which they are presented.
  1. Nessus - For server weaknesses assessment - Choose the default scan and, if possible, the disruptive scans, and let it rip. It generates an HTML report as it scans the target.
  2. Paros - For web server weaknesses assessment - Paros functions as a proxy. Once you run it, reconfigure your browser to use a proxy and select localhost at port 8080 as the proxy. This will send all requests through Paros and let it capture the site. Make sure you visit only the target site. After this, choose Analyze -> Spider, then Analyze -> Scan. After the scan is finished, choose Report -> Last Scan Report to get the HTML report.
  3. Ratproxy - For web site/application weaknesses assessment - Functions much like Paros. Once you run it, reconfigure your browser to use a proxy and select localhost at port 8080 as the proxy. This will send all requests through Ratproxy. Make sure you surf every possible link and use every possible function of the site. Once you are finished, run the output log through the supplied report script to get the HTML report.
Each of these tools will provide very clear reports. Look for weaknesses that are marked medium and above. Then investigate the reports and recommendations on each to evaluate the actual risk to your company.

When you complete this process, if the web server hosts other sites, use Ratproxy on as many of them as you can, to assess the possible risk to your site via attacks delivered through those other sites.

Talkback and comments are most welcome

Related posts
Protecting from Meddling Web Applications
Strategic Choice - Proper Selection of Web Hosting
Web Site that is not that easy to hack - Part 1 HOWTO - the bare necessities
Web Site that is not Easy to hack - Part 2 HOWTO - the web site attacks
Rules for good Corporate Web Presence

WMI Scanning - Excellent Security Tool

When doing a security assessment for a large organization, you need to collect a multitude of information for a proper assessment.
One of the essential elements in a network assessment is the systems inventory. While most security personnel would use a port scanner to scan the full IP range of the organization, when analyzing a Windows environment there is another tool that should be used in coordination with the port scanner.

The tool
When scanning a Windows environment, a WMI (Windows Management Instrumentation) scanner is a valuable assistant. The tool I'm using is WMI Asset Logger, delivered by John J Thomas as freeware.

The process
WMI Asset Logger requires only a domain admin username and password. It will query the domain for registered computers, or ask for a target computer, and then query each machine to give you a nice overview of the current computer status on the network.

The results are presented in the GUI; an example is shown below.



The benefits
Of course, one can always ask - what are the benefits of using a WMI scanner?

  • Verify the inventory delivered by the IT personnel - with WMI Asset Logger you can create a rapid report against which you can compare the report delivered by IT and verify their formal statement.
  • Make a rapid checkup of installed OS versions and Service Packs - Quite often, your first priority is verifying the consistency of the installed operating systems. With WMI you get a bird's-eye view of the installed OS on all Windows machines.
  • Create a relevant inventory for comparison in subsequent controls - the report is easily exportable into XLS or a tab-delimited file, so it's easy to load results into a database for comparison of subsequent scans (monthly or quarterly).
  • Find primary targets for deep inspection - Based on simple rules and pairing with the results of a port scanner, you can find interesting targets for deeper analysis.
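If you prefer a scriptable alternative to the GUI tool, a similar OS and Service Pack sweep can be sketched with the third-party Python 'wmi' package, run from a Windows administrative workstation. This is not part of WMI Asset Logger; the hostnames and credentials below are placeholders.

```python
# A sketch, not WMI Asset Logger itself: sweep a list of hosts for OS and
# Service Pack details using the third-party 'wmi' package
# (pip install wmi, requires pywin32), run from a Windows admin workstation.
import csv
import wmi

targets = ["srv-dc01", "srv-file02", "ws-acct-17"]   # e.g. exported from AD

with open("wmi_inventory.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["host", "os", "service_pack", "last_boot"])
    for host in targets:
        try:
            conn = wmi.WMI(host, user=r"DOMAIN\auditor", password="***")
            for osinfo in conn.Win32_OperatingSystem():
                writer.writerow([host, osinfo.Caption,
                                 osinfo.CSDVersion, osinfo.LastBootUpTime])
        except wmi.x_wmi as err:
            writer.writerow([host, "UNREACHABLE", "", str(err)])
```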

Talkback and comments are most welcome

Related posts
TrueCrypt Full Disk Encryption Review
Creating Your Own Web Server

Is the Server Running - optimal use of redundancy on a budget

When purchasing a server, most companies select a server class computer from a reputable manufacturer. These days servers usually come loaded with redundant components to optimize availability and make the server more resilient. And yet a lot of these servers fail at the first glitch simply because they are not configured properly. Here is a brief blueprint on how to optimally utilize the redundancy you have purchased and paid for.

First, let's analyze what is usually redundant in a server. If we take into account only the garden variety commercial servers and ignore the hugely expensive fault tolerant machines, here is what you usually get:

  • Redundant Disk drives
  • Redundant Power Supplies
  • Redundant Network Adapters

To get the maximum from these elements, you should perform the following steps:
  • Redundant Disk drives - organize them into a RAID configuration. RAID 1 (mirror) is the best in terms of redundancy and speed, but you lose exactly 50% of the capacity. RAID 5 (parity) gives you the best trade-off between capacity loss and performance (see the quick capacity comparison after this list). When planning a RAID, look for a server that has a hardware RAID controller. Modern server operating systems can build a RAID themselves, but then the operating system has to dedicate resources and run specific software to maintain the RAID - burdening the main CPU with this task.

  • Redundant Power Supplies - connect the power supplies of the server to power lines coming from different circuit breakers. This will save you a lot of grief if the cleaning lady decides to connect her vacuum cleaner to an outlet on the same circuit breaker as the server and overloads it. If possible, connect the power supplies of the server to different Uninterruptible Power Supplies. This way, all UPS systems will help your server ride out a blackout.

  • Network adapters - First, organize the network adapters to work as a failover team. This is realized with specific drivers delivered by the manufacturer; the driver creates a virtual network adapter. The virtual network adapter is configured with the IP address of the server, and it binds to one of the physical network adapters. Should that adapter lose connectivity, the driver will bind the virtual network adapter to the other physical one, thus reestablishing connectivity. For an optimal solution, connect the physical network adapters to several switches which are interconnected via trunk links - thus creating one large meta-switch.
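To make the RAID capacity trade-off mentioned above concrete, here is the arithmetic in a small Python sketch (identical disks and no hot spares assumed - a simplification, not a sizing tool):

```python
# Quick arithmetic for the RAID capacity trade-off, assuming identical disks
# and no hot spares.
def usable_capacity_gb(disks, disk_size_gb, raid_level):
    if raid_level == 1:       # mirror: half the raw capacity survives
        return disks * disk_size_gb / 2
    if raid_level == 5:       # parity: one disk's worth is lost to parity
        return (disks - 1) * disk_size_gb
    raise ValueError("only RAID 1 and RAID 5 are covered here")

for level in (1, 5):
    print(f"RAID {level}: {usable_capacity_gb(6, 300, level)} GB usable")
# 6 x 300 GB drives -> RAID 1: 900.0 GB usable, RAID 5: 1500 GB usable
```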

All described actions can be performed by your in-house system administrator, and do not require any special expertise. With these simple steps, you'll achieve excellent availability of your server.

Talkback and comments are most welcome

Related Posts



Software vendor relationship - can you make it better?

Your company bought a corporate software solution. Your teams tweaked, modified and tested to get it up to your requirements. Now, you just continue to use it for the next 20-30 years without problems. Right?

Well, not quite. The marriage between a corporation and a software vendor has a tendency of turning ugly as time passes and here is why:

  • Software Vendor Greed - You are tied into a maintenance and upgrade contract with a yearly fee. And lately, the largest software vendors are increasing these fees as new sales drop. The latest examples are SAP and Oracle, and they are actually blaming it on inflation - here is a great article on this tendency http://blogs.zdnet.com/BTL/?p=9717
  • Customer treatment - After a corporation has migrated its core data into the new software, and sufficient time has passed to make a reverse migration into the old system impossible (usually 3-6 months), the software vendor relaxes. It knows that the customer is locked in for the foreseeable future, since migrating back or to another system is way too costly in time, money and human effort. So the software vendor becomes less responsive, focuses on new deals, and in extreme cases even becomes outright arrogant
  • Software Quality Failures - What initially seemed like a minor issue, can grow into a big ugly monster of a bug as the dataset grows, or as errors creep into the system. And the software vendor may choose not to address the core problem, simply because it is too costly or not really possible to be fixed without a full overhaul. So what usually happens is that your company ends up throwing ever more powerful hardware at the problem in the hope that raw speed will help alleviate the issues.

So, is there a way to kick the software vendor where it hurts and make them work as well as they did the first time they sold you a solution?

There is no silver bullet solution, but the following suggestions can help a lot:

  • Put a big stick in the purchase contract - Include software issue resolution times and change request reply times, bound with severe penalties, in the original purchase contract. This way, all you need is to enforce this SLA every time the software vendor slips. Pretty soon the software vendor will have to bite the bullet and start dedicating its resources to you - simply because it will cost them way too much to treat you badly.
  • Put a carrot in front of the software vendor - Make payment for any new expansion or module purchase conditional on clearing all outstanding issues in the original software.
  • Always plan a contingency - Have a planned alternative solution. This is the most difficult suggestion - and the most costly to complete. But when in dire straits, look at alternative solutions - especially fully managed (outsourced) alternatives. With these, your organization is only the user of the software, and most of the migration effort in terms of hardware and resources is offloaded to the outsourcing company. Oh, and by the way, once the software vendor understands you have an alternative, quality will definitely improve.

Talkback and comments are most welcome

Related posts

Information Risks when Branching Software Versions

3 rules to keep attention to detail in Software Development

8 Golden Rules of Change Management

Application security - too much function brings problems

Security risks and measures in software development

Security challenges in software development


High Availability - Clusters have Issues

As IT services become more and more important to the organization, the notion of a service being down becomes scary. So the organization begins to search for ways to make its IT services more available. The usual solution for high availability is to place the IT service on a cluster system.

So, let's start with a definition
A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. They come in three generic flavors:

  • High-availability (HA) clusters - implemented for the purpose of better availability of IT services
  • Load-balancing clusters - distributing a workload evenly over multiple nodes
  • Grid computing - large sets of computers optimized for workloads which consist of many independent jobs or packets of work

High Availability Cluster
For a typical corporation the 'weapon of choice' is the High-availability cluster. The simplest form of a high availability cluster contains two computers and a shared disk resource.


Most high availability clusters run in 'failover' mode, also known as 'active/standby' mode. This means that one of the computers (nodes) is running the IT service (web server, database server or similar) while the other node is idling, waiting for the first node to fail.


Should it fail, the second node will take over the IT services and related resources - usually disk volumes, IP addresses and hostnames - and continue to run the service. This takeover takes anywhere from several seconds to a minute, which is acceptable for most types of services.

The process of takeover includes a process called 'voting'.
  1. Both nodes are checking each other's health at regular intervals. This health check is known as a heartbeat
  2. If one of the nodes does not respond, the other one will assume that the unresponsive node has failed and that it needs to take over the IT service.
  3. The problem with the immediate decision to take over is that the missing response can be just a connectivity issue, in which case the first node is still up and running - and both nodes will end up fighting over the IT service. This is known as a 'split brain' cluster
  4. To avoid this situation, an odd numbered element must be included. Since a third computer can be expensive, a usual third element is a disk drive that is connected to both servers. This disk drive is known as a 'quorum disk'.
  5. So, in case of a failure, the surviving node will first contact the quorum disk and perform a 'vote' - usually write a file and wait a predetermined time to see whether the other node will erase it. If the file is there, the vote is successful, and the surviving node will take over the IT services.
This entire voting process takes several milliseconds, so it does not delay the failover process.
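As a toy illustration only - not a real cluster agent - the vote can be pictured like this in Python, assuming a hypothetical marker-file convention on the shared quorum disk:

```python
# Toy illustration of the quorum vote described above; real cluster software
# works differently and much faster. Paths and timings are placeholders.
import os
import time

QUORUM_MARKER = "/quorum/vote.marker"   # file on the shared quorum disk
GRACE_PERIOD = 0.5                      # seconds; real clusters vote in milliseconds

def peer_is_really_dead():
    """Write a marker, wait, and see whether the peer erased it to veto us."""
    with open(QUORUM_MARKER, "w") as marker:
        marker.write("node2 claims the clustered services\n")
    time.sleep(GRACE_PERIOD)
    return os.path.exists(QUORUM_MARKER)   # still there -> the vote succeeded

def on_missed_heartbeat():
    if peer_is_really_dead():
        print("vote won: taking over disk volumes, IP addresses and hostnames")
    else:
        print("vote lost: the peer is alive, this was only a connectivity glitch")
```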

Issues
Naturally, there are issues with using clusters. Here are the most common
  • Cost - Cluster systems need specific cluster aware software, the hardware is usually highly redundant and the shared disk systems are quite expensive.
  • Resource Waste - In a failover cluster - the most common variety - one of the cluster computers is mostly idle, just sitting there and waiting for the first node to fail.
  • Difficult performance scaling - In a failover cluster, if the current cluster node does not have sufficient power, it is not easy to replace it with a faster CPU. Everything inside a computer designed to run in a cluster is more expensive and needs special approval by the cluster software vendor to confirm that it is compatible with the cluster solution. And even if you manage to upgrade the system, you have to upgrade both nodes, so that if failover occurs the performance remains the same.
  • No protection against software error - In essence, the cluster is not a silver bullet. It protects against hardware error, but in no way helps against corruption of information caused by faulty software or human error.
Conclusion
The High Availability cluster is an excellent solution for increasing IT service availability - if you can live with its issues:
  • For maximum effect it needs to be supported by methods of protection against software or human error (backup and archive)
  • For resource waste, you can run several IT services and balance them on both nodes, so each node acts as failover for the services running on the other node. But bear in mind that when a failover occurs, you'll have to run all services on one node - thus creating a possible performance issue.
  • If the cost of hardware and upgrades is a major issue, you can even consider an asymmetric cluster - one node being much more powerful than the other. This is a double-edged sword: should a failover occur, you'll be left running on considerably lower resources, which may not be acceptable to the organization.

Talkback and comments are most welcome

Know the Difference - Backup vs. Archive

Information availability and IT operations require data backup. Legal and compliance requirements dictate data archival. But many organizations make the mistake of equating archive with backup, which can lead to the wrong choice of backup or archival media, very poor restore times and even loss of information.

Example Scenario
As part of an audit, an auditor reviewed the backup and archival system of a company. The company presented their backup systems, access controls and audit trails. When asked about archived data, they again pointed to the tapes containing their backups. But their backup tapes are rotated every 6 months, so the company does not have any archive older than 6 months.
The company failed the legal archival requirement.


Analysis
In order to properly design and architect a backup or archive system, one must clearly understand the differences between backup and archive:

Backup
The key reason for the existence of backup is to provide an alternative data source in case the primary data source is corrupted or destroyed. A backup process creates a copy of the current state of data. It is understood and accepted that the state of the backed-up data will change in the future under controlled circumstances. At that point the old backup becomes irrelevant for operational purposes and the data needs to be backed up again.

Criteria for selecting a backup solution

  • The backup needs to be accessible quickly
  • The media should be reusable for maximum cost efficiency
  • The media should survive transport in less than ideal conditions (the trunk of a car)
  • The backed-up information should survive with full integrity and availability for several months on the backup media
  • The backup should be able to span multiple media (if the backup set is larger than the media capacity)
  • The solution should be intelligent enough to enable different backup sets (full backup, incremental backup, differential backup etc.)
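To make the last point about backup sets concrete, here is a minimal Python sketch of which files each set type selects, assuming a hypothetical mapping of file paths to last-modified times:

```python
# A sketch of how the three backup set types select files, assuming a
# hypothetical mtimes mapping of {path: last-modified datetime}.
def select_files(mtimes, last_full, last_backup, set_type):
    if set_type == "full":
        return set(mtimes)                                         # everything
    if set_type == "differential":
        return {p for p, m in mtimes.items() if m > last_full}     # changed since last full
    if set_type == "incremental":
        return {p for p, m in mtimes.items() if m > last_backup}   # changed since last backup of any kind
    raise ValueError(f"unknown backup set type: {set_type}")
```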

Archive
The key reason for the existence of archive is to provide a historical reference of information. The archive process's final product is a long-term, non-changeable copy of data or information. It is understood and accepted that the archive media must be resilient, capable of surviving over long periods of time (years), and must guarantee that the archived data remains unchanged during the entire archive lifespan.

Criteria for selecting archive solution
  • The archive media needs to be able to handle different data collections while treating them with the same level of integrity - individual data records from a database as well as entire documents
  • The access speed of an archive can be slow, but the archive media should have an extremely high level of reliability (remember, archives can span several decades)
  • When creating an archive, always plan the lifetime of the archive, and make sure that the manufacturer will provide systems that can retrieve the stored data - having an archive that is unreadable because there is nothing to read it on is a terrible idea
  • Data integrity must be maintained over the entire period of the archive's existence - there is no point in having an archive if you can't trust that it's the same as it was when archived
  • There should be an index of archive media so you can retrieve relevant information from the archive
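One simple way to make the integrity requirement verifiable in practice - a sketch, not a prescribed method - is to keep a SHA-256 manifest of the archived files (stored off the archive media) and re-check it on every read:

```python
# A minimal sketch: build a SHA-256 manifest of the archived files and verify
# it later. It assumes the manifest itself is stored off the archive media
# (or at least outside archive_dir).
import hashlib
import json
from pathlib import Path

def build_manifest(archive_dir):
    """Map every file path under archive_dir to its SHA-256 digest."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(Path(archive_dir).rglob("*")) if p.is_file()}

def verify_manifest(archive_dir, manifest_file):
    """Return the list of paths whose content no longer matches the manifest."""
    stored = json.loads(Path(manifest_file).read_text())
    current = build_manifest(archive_dir)
    return [path for path, digest in stored.items() if current.get(path) != digest]
```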

Conclusion
Backup and archive solutions may be part of an integral system, but they perform a different function, so the actual media and individual systems will most likely vary.

While backup is still performed mostly on magnetic tapes, archiving is usually done on optical disks or microfilm. You may choose magnetic media for an archive, but if you do, you need to plan for your archive tapes to be shielded from long-term adverse influences, and you must maintain a functional reader for the tapes over the entire lifespan of the archive.

Talkback and comments are most welcome

Related posts
3 Rules to Prevent Backup Headaches
Business Continuity Plan for Blogs

New Helix3 Forensic CD - Welcome

E-fense has published a new version of their acclaimed Helix Forensic Live CD. It is now in version 2.0.

UPDATE: Helix3 is no longer a free product. e-Fense decided to make it a commercial product

Just like the old version, the new one contains two major components:

  • A LiveCD (Based on Ubuntu) - A full blown forensic toolkit with a nice all-encompassing set of tools.
  • Windows set of tools - which allow the user to use a subset of forensic tools within a running windows system (most often during first response).
The Windows toolkit maintains the same interface as before, but the Windows-based application set is now coherent - there are no missing applications. The previous version had a number of non-working links in the Windows toolkit, which could cause a lot of grief at the wrong time.

Just a reminder of the Windows Helix Menu


The Linux LiveCD interface has seen a major overhaul. It is now based on Gnome, and the overall interface is much better organized.

The following screenshot depicts the new Helix boot menu


Unfortunately, probably in search of better overall performance, it is departing from the forensic track and moving much more into the mainstream - the toolkit is missing a lot of nice new forensic tools that could have been installed and utilized. Hopefully, they'll be included in the next version.
There is one major new feature that was missing from the previous version - the LiveCD can now be installed on a hard drive, effectively creating a full-blown forensic investigation computer without the need to lug around a bootable CD.

The installer suffers from several bugs, so make sure you partition the target hard drive manually - the automatic option doesn't work

The following Screenshot depicts the installed version of Helix


The new version of Helix is much easier to use and overall a much more complete product.



Talkback and comments are most welcome

Related Posts
Tutorial - Computer Forensics Process for Beginners
Tutorial - Computer Forensics Evidence Collection

Strategic Choice - Proper Selection of Web Hosting

The times of expensive hosting and limited functionality on web servers are long gone. Today, everyone and their mother is doing web hosting, with huge hosting disk capacity at very acceptable prices. But even though most hosting providers differ only in price on paper, things are much different in the real world.

You can get stuck with poor hosting, a lot of non-functional elements of the site and even huge downtime on your site.
Here is a practical approach to selecting a good but Affordable Web Hosting provider. In order to properly evaluate providers, you'll need to engage both your technical and business teams.

Make a table like the one shown below and start grading according to the following bullets.


  1. Business Support Quality - Through this category, you will evaluate how prepared the hosting provider is to meet your business expectations of hosting. When evaluating business support quality, you need to answer the following questions. Add two points for each Yes answer to your business support category grade:
    • Does the hosting provider's sales rep answer calls and e-mails in a timely manner?
    • Does the hosting provider's sales rep try to understand what you are trying to achieve?
    • Is the sales rep discussing meeting your requirements?
    • Does the sales rep provide direct contact with a dedicated technical person for clarifications?
  2. Technical Support Quality - Through this category, you will evaluate how prepared the hosting provider is to meet your technical requirements for hosting. When evaluating technical support quality, you need to answer the following questions. Add two points for each Yes answer to your technical support category grade:
    • Does the hosting provider's technical support person answer calls and e-mails in a timely manner?
    • Does the hosting provider actually support the technical requirements of your site?
    • Does the hosting provider's technical support person answer your team's technical questions in a clear manner?
    • Does the hosting provider's technical support person ask for clarification of your requirements?
    • Does the hosting provider's technical support person warn you of any specific policies and limitations in their hosting solution that might hamper you?
    • Does the hosting provider offer remote tools for managing the technical side of the web site (service stop/start, add-ons and libraries management etc.)?
  3. Hosting Solution Breadth - Through this category, you will evaluate what other services you might be able to utilize in the near future combined with web hosting. When evaluating hosting solution breadth, you need to answer the following questions. Add one point for each Yes answer to your solution breadth category grade:
    • Is the hosting provider prepared to take over DNS hosting?
    • Is DNS records management available to your technical staff via remote interface?
    • Is there an e-mail service available?
    • Can the e-mail service capture all e-mails for you if necessity arises?
    • Are they offering any other services as bundle or with additional payment?
  4. Hosting Contention Ratio - Through this category, you will evaluate how many other sites you'll have to compete with for server resources, and how many different sites can impact your own in terms of security since they are on the same server. When evaluating contention ratio, you need to answer the following questions. Add one point for each Yes answer to your contention ratio category grade.
    • Is your site on a dedicated server?
    • Is your site on a server with no more than 50 large customer sites?
    • Is your site on a server with dedicated and isolated resources from other sites (virtual machine or chroot type of isolation)?
  5. Error Recovery - Through this category, you will evaluate how the hosting provider will react to recover your web site should an error occur. When evaluating error recovery, you need to answer the following questions. Add one point for each Yes answer to your error recovery category grade:
    • Is a backup of the site performed daily?
    • Is the backup of the site performed together with a backup of the site's backend database?
    • Is hacker attack detection/prevention present?
    • Will you get alerting/notice from the provider if suspect hacker activity is detected?
    • If site defacement occurs, can the hosting provider recover to a working site within 15 minutes of detection or notice by you?
    • If site defacement occurs, is proper forensic investigation performed with results submitted to you?

After you've finished answering your questions, you'll have a table like the one below


Select the top 20% of providers by total grade and add the pricing of their solutions. The cheapest one will be your Affordable Web Hosting provider. You can afford to pay them, but you don't need to accept low quality.
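The grading and selection logic can be expressed in a few lines of Python; the category names, answer lists and prices below are hypothetical placeholders:

```python
# A sketch of the grading above. Categories 1-2 score two points per 'yes',
# categories 3-5 one point per 'yes'.
POINTS = {"business_support": 2, "technical_support": 2,
          "solution_breadth": 1, "contention_ratio": 1, "error_recovery": 1}

def total_grade(answers):
    """answers: {category: [True/False per question in that category]}"""
    return sum(POINTS[cat] * sum(yes_answers) for cat, yes_answers in answers.items())

def pick_provider(providers):
    """providers: {name: (answers, monthly_price)} -> cheapest of the top 20% by grade."""
    ranked = sorted(providers, key=lambda name: total_grade(providers[name][0]),
                    reverse=True)
    shortlist = ranked[:max(1, len(ranked) // 5)]               # top 20% of total grades
    return min(shortlist, key=lambda name: providers[name][1])  # then the cheapest wins
```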
Talkback and comments are most welcome

Related posts

Rules for good Corporate Web Presence
Creating Your Own Web Server
Tutorial: Making a Web Server
Web Site that is not that easy to hack - Part 1 HOWTO
Web Site that is not Easy to hack - Part 2 HOWTO - the web site attacks

GPS Fleet Tracking - Risks or Benefits?

GPS Fleet Tracking is usually associated with taxi fleets, armored transport and police/security vehicles. In reality, a lot of companies use GPS tracking not just for their company fleet, but also for personal tracking of their top employees or sensitive equipment. And GPS itself brings a whole new set of challenges to information security.

The Functionality
The Global Positioning System (GPS) is a Global Navigation Satellite System developed by the United States Department of Defense. It uses a constellation of between 24 and 32 Medium Earth Orbit satellites that transmit precise microwave signals, that enable GPS receivers to determine their current location, the time, and their velocity (including direction).

GPS Fleet Tracking uses a GPS receiver paired with a radio transmitter. The GPS receiver determines its location, direction and velocity and transmits this information to a central monitoring system via the radio transmitter. The radio transmitter is most frequently a GSM mobile device which transmits the data via GSM Data or GPRS as TCP/IP packets.
The central monitoring system is a server that receives the packets sent by the GPS tracking devices, stores them in a database and presents them as an overlay on a map.

The following diagram presents the overall system:

  1. The GPS receiver contacts the GPS satellites and calculates its position, velocity and direction. At any given time, the GPS receiver has at least 3 satellites over the horizon to contact
  2. The GPS tracking device sends the calculated information via the GPRS data link to the information hub
  3. The information hub relays the received information to the GPS Tracking server
  4. The user uses the monitoring station to follow the fleet or to review the information about any vehicle stored in the database.
BrickHouse Security has a very comprehensive selection of GPS Fleet Tracking solutions.
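For illustration, a tracking report received by the information hub could be modeled like this in Python; the field names are hypothetical and do not represent any real vendor protocol:

```python
# Purely illustrative record shape for one tracking report arriving at the
# information hub over the GPRS link; field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TrackingReport:
    vehicle_id: str
    latitude: float       # decimal degrees
    longitude: float      # decimal degrees
    speed_kmh: float
    heading_deg: float    # 0-360, clockwise from north
    reported_at: datetime

def parse_report(payload: dict) -> TrackingReport:
    """Turn one decoded TCP/IP payload into a record ready for the database."""
    return TrackingReport(
        vehicle_id=payload["vid"],
        latitude=float(payload["lat"]),
        longitude=float(payload["lon"]),
        speed_kmh=float(payload["spd"]),
        heading_deg=float(payload["hdg"]),
        reported_at=datetime.fromtimestamp(payload["ts"], tz=timezone.utc),
    )
```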

The Business Benefits
There are well known business benefits of using a Fleet Tracking system. Here are several:
  1. Tracked vehicles are used much more responsibly and only for the intended purpose (no detours to buy groceries, or weekend trips to the lake).
  2. Because they are used for the planned purpose, the fuel usage is much more optimal.
  3. Ability to observe employee vehicle usage to establish their responsibility towards company assets.
The Physical Security Benefits
Apart from the purely business perspective, GPS Tracking has security benefits:
  1. GPS Fleet Tracking enables stolen vehicles to be recovered very fast.
  2. Paired with a panic button, it can be used for tracking and helping kidnapped or blackmailed key personnel (the chief officers and other key employees can be equipped with such GPS Tracking device)
  3. Valuable or sensitive equipment or assets can be observed during transport to identify situations where the asset has deviated from its route or been delayed in transport - a major indication of attempted tampering or theft

The open and sensitive questions
Naturally, every new system brings new challenges for information security. Here are the most common ones connected to GPS tracking:
  • How do you secure your GPS tracking database - the GPS tracking data is sensitive to say the least. Anyone stealing that data can analyze the travel patterns of each vehicle and subject tracked and plan a possible theft or crime. Also, the GPS tracking data will identify the 'blind spots' where tracking is impossible, like tunnels, parking structures, even streets with train tracks above them - which are first choice for theft.
  • How do you access the GPS tracking data? - If one cannot steal the information from the database, it can be stolen in transit. If the monitoring station and the servers are at a distance from each other, always use an encrypted channel to access this information.
  • Do you inform your employees of the GPS tracking system? - Informing the employees that their vehicles are tracked is a double-edged sword: if you do inform them, they should be more careful, but on the other hand some of them will go to great lengths to destroy the GPS device so they can go about their way as they used to. If you don't inform them, you can end up in court for a number of infractions - depending on the judicial system.
  • Do you control against rogue GPS devices? - Just as you use GPS for a legitimate function, a criminal may use a rogue GPS device to simply collect information off your vehicles. There isn't a very easy way to find such devices once they are planted, but it is much easier to control access to the relevant vehicles, to prevent a criminal from approaching them long enough to plant the rogue device.
Conclusion
GPS Fleet Tracking systems are very useful, and can enable the company to achieve considerable savings in fleet management, as well as provide additional security leverage for personal and asset safety.

But at the same time, such a system introduces a new element with its own IT and communications requirements, and another repository of highly confidential data.

So any company implementing a GPS Fleet Tracking system should clearly define its objectives and requirements, and seek out a professional integrator to deliver the entire solution, always bearing in mind that the solution must be both functional and secure.


Talkback and comments are most welcome

Controlling Firefox Through Active Directory

Firefox is a great browser. But it is widely avoided by corporations, since it is difficult to manage Firefox through a corporate-wide security policy, the way IE is managed through Active Directory.

FrontMotion has published FrontMotion Firefox Community Edition - a Firefox with the ability to lockdown settings through Active Directory using Administrative Templates. The concept is interesting, but how well does it work?

Here is a review of the FrontMotion solution for Firefox and Active Directory Integration

The Test
FrontMotion has prepared an MSI package of Firefox, with several modifications to enable group policy integration, as well as the administrative templates for Firefox.
Download the administrative templates (firefox.adm and mozilla.adm) and add them to your Group Policy Editor.


The administrative templates appear in the Group Policy Editor under both User and Computer Configuration, and let you configure the following elements in the Firefox section:

  • General Settings - centrally configure and enforce the home page setting for the Firefox users/computers
  • Enable Automatic Image Resizing - self-explanatory
  • Disable Firefox Default Browser Check - self-explanatory
  • Cache - set the cache size and path
  • Set Default Download Location - set the downloads path
  • Proxy Settings - centrally configure and enforce proxy settings for the Firefox users/computers
  • Disable XPI Installs - block installation of Mozilla extensions
A configured policy is presented on the following image.


For the test, we installed Firefox Community Edition and applied the configured policy.


When we ran Firefox and tried to change the proxy, we were unable to, as can be seen on the image below.


We can confirm that Active Directory Group Policy control works well overall. However, the number of configurable parameters for Firefox is very small, especially compared to the flexibility Microsoft provides for Internet Explorer.

Conclusion
Integrating Firefox into Active Directory is great progress. But the current level of the solution makes it more of a curiosity, since its functionality will change with every new build from FrontMotion. If Active Directory integration is merged into the main Firefox development track and properly developed, for instance for Firefox 3.2, it will be a great step for Mozilla against Microsoft.
Once corporations are confident that Active Directory support is properly adopted into generic Firefox and is there to stay, I know a lot of administrators who will happily phase out Internet Explorer for Firefox.

Talkback and comments are most welcome

Related posts
TrueCrypt Full Disk Encryption Review

Protecting from Meddling Web Applications

The current trend of Web 2.0 (or AJAX) is to abstract all processing away from local computer resources and just present the final 'drawing' of the web application, which contains only forms or lightweight widgets that pose a very low security threat. However, a lot of software companies are still sticking to old school (read: outdated and insecure) programming technologies for web applications, which can leave your security cracked wide open.

So, how do you protect yourself from web applications that wish to meddle with your computer?

Example scenario:

A vehicle service company has created an online ticketing system for fast problem reporting and resolution. A rent-a-car company which uses the vehicle service needs to use the application for logging faults in its fleet. At first use, the web application does not work on any computer at the rent-a-car company. After some analysis, the security administrator concludes that the web application requires installing an ActiveX control on the client PCs in order to work - something explicitly denied by the security policy.
Since business comes before security, the rent-a-car managers decide that everything must be done for the service web application to work properly. Thus, the ActiveX control is set as trusted and everything is fine.

Two months later, the service company ticketing web server crashes. At the same time, during regular fleet inventory, the rent-a-car company concludes that 17 luxury rentals are missing and have not been seen for at least a week. The GPS locators of the cars are found at an abandoned parking structure connected to a car battery.

Suspecting the system administrators are in on the theft, the police bring in forensic teams that sift the systems for incriminating evidence. They discover none, but find a trojan horse that tampers with database records, hidden in the ActiveX control downloaded from the web server of the vehicle service company. The vehicle service company is contacted for investigation, and it turns out the web server has been formatted: it crashed due to corruption of several system files on the day the 17 cars went missing. The manufacturer of the web ticketing application is also contacted and its ActiveX control is analyzed. The original ActiveX control does not contain any foul-play code.

After the incident, the rent-a-car company files a damages suit against the service company, and the vehicle service company fires the administrator for gross negligence.

Analysis:
The entire chain of events in this scenario is a simple case of non-core competence comedy of errors:
  1. Both companies have a completely non-IT core business, and as such are most likely to use the cheapest product on the market, as long as it works.
  2. Their security awareness is an afterthought. The rent-a-car company trusted a foreign application and installed it on their computers.
  3. The foreign application was downloaded from the Internet, and there was no way to confirm that the application is unmodified.
  4. At the same time, the vehicle service company hosted a web application using their resources without proper knowledge and implementation of security
  5. ActiveX as a technology is risky - it has no technological security; it just relies on the user's permission to trust and install itself. After that, the application has unrestricted access to anything the user has access to - even hardware (keyboard, disk drives, network...)
Conclusion and Recommendations:
There are simple and effective strategic steps to alleviate the risks of this scenario

If you are in a role similar to the vehicle service provider
  1. Focus on core competence and outsource the application hosting to a reputable IT hosting company
  2. When purchasing applications - add a functional requirement for minimal interference to the client side systems
  3. Request a periodical reporting on security of the hosted application from an independent source (auditor)
  4. Request that all code and information transferred via the Internet be signed with a code signing certificate issued by a trusted issuer.
If you are in a role similar to the rent-a-car company
  1. Have a strict security policy and don't allow foreign code within your network (create isolated tunnels, separate isolated stations or similar level of isolation)
  2. Request a periodical reporting on security of the hosted application from an independent source (auditor)
  3. Request that all code and information transferred via the Internet be signed with a code signing certificate issued by a trusted issuer

Talkback and comments are most welcome

Related posts

Information Risks when Branching Software Versions

3 rules to keep attention to detail in Software Development

8 Golden Rules of Change Management

Application security - too much function brings problems

Security risks and measures in software development

Security challenges in software development


Thrown in the Fire - Database Corruption Investigation

Analyzing an incident when the manufacturer claims it's an operator error and the operator claims it's an application error is one of the most daunting tasks of a security officer.
And this is exactly the type of incident the security officer will be called upon to investigate, simply because management needs an independent observer and has doubts about both the operator and the manufacturer. Here is what to do when thrown into the fire.

Prerequisites

  1. Do not let the manufacturer's expert be the one who leads the investigation. If he insists on being involved, make it clear that this is your investigation and that he has to ask permission for, and explain, any action he wants performed on the database and application during the investigation.
  2. Know a bit of SQL or bring someone that you trust that knows SQL.
Tools of the trade

  • Toad for Oracle
  • Query Analyzer or MS SQL Server Management Studio for SQL Server
  • Event Viewer for Windows
  • Syslog and text log files for Unix/Linux
  • Notepad, hi-res camera or screenshots for everything.
Incident Investigation Process


  1. Gather as much information as possible - even gossip!
    • Talk to the witnesses of the incident.
    • Establish who else worked with the application during the incident discovery
    • Document the events that lead to the discovery of the problem and their timeline
    • Document any data involved in the process - account numbers, exact names, values, currencies - anything that can be found in the database. Do this for both the clean and the corrupt data
    • Gather screenshots of the application of the events that lead to the discovery of the problem
  2. Establish a time interval of the incident
    • Choose the database backup closest to the time the incident was identified, request that a database restore be done, and have the users verify that the restored database is in good condition (a bisection sketch of this narrowing process appears after this list)
    • If the database is 'good' then the incident occurred between that backup and now.
    • If the database is 'bad' repeat with an earlier backup
    • Repeat until you find the closest 'good' and 'bad' backups - the incident has occurred sometime in that interval
  3. If possible, try to reproduce the conditions of the incident
    • Starting from the known 'good' state - a non-corrupt database - ask the users to repeat their activities
    • Observe/Film the user while performing the activities in the application
    • Run a profiler/logger type of tool while the users are working to capture all backend activities on the database
    • Follow through until the application is closed and all sessions are torn down - there can be a closing script that is a problem
  4. Identify key data repositories
    • Consult the documentation and captured queries if available to identify the tables that the corrupt data is kept in.
    • If there is no usable source, use trial and error: The tables are usually named in a logical manner related to their purpose - so match them to the statement of events to find which tables are relevant.
    • In order to confirm that the right tables are identified, find at least some of the documented data involved in the incident in these tables. Don't be disappointed if you miss at first - they MUST be somewhere!
  5. Look through the audit and/or the logs of the database to identify which updates or changes were made in the database.
    • This is a very problematic step - some applications and databases will not have any audit, or a small amount of audit.
    • But almost all applications have some form of application trail - a table or set of tables that logs actions to be done or already done, mostly because a lot of application actions depend on each other, so they need to create a unique identifier (key) in one table to be referenced further.
  6. Match the described timeline with the database logged actions as closely as possible.
    • Consult the witnesses of the incident during this process - tell them that you notice a certain type of event at a certain time. This reminder triggers memory - they'll remember more detailed actions of their work!
    • Add log details and timestamps at each step of the timeline
  7. Discuss Observable Trail With Manufacturer and Users
    • If you find a proof of a bug or human error you're in luck. Write a report and recommend corrective and preventive measures.
    • Most likely, you'll find a gap in the events right where the incident occurred (interval of minutes) but the trail of events will indicate what was the next step: whether the program malfunctioned or the user made a flagrant error. Then you need to confront the manufacturer and users with the problem. Ask for a recreation of the actions with both parties present and with full logging. The log will give the actual event.
      NOTE: Non-reproducible errors are possible - if the error cannot be reproduced, then that is a report too. But then you need to increase the logging level to the maximum possible until the problem resurfaces.
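As referenced in step 2, the 'good'/'bad' narrowing is essentially a binary search over the available backups. A Python sketch, under the stated assumptions:

```python
# A sketch under stated assumptions: backups is a list of backup identifiers
# sorted oldest-to-newest, backups[0] is known 'good', backups[-1] is known
# 'bad', and restore_and_verify() wraps the manual restore-and-ask-the-users
# check from step 2.
def narrow_incident_interval(backups, restore_and_verify):
    """Return (last_good, first_bad); the incident happened between them."""
    lo, hi = 0, len(backups) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if restore_and_verify(backups[mid]):   # users confirm the data is 'good'
            lo = mid
        else:
            hi = mid
    return backups[lo], backups[hi]
```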

Related Posts
Tutorial - Mail Header Analysis for Spoof Protection
Tutorial - Computer Forensics Process for Beginners
Tutorial - Computer Forensics Evidence Collection
The SLA Lesson: software bug blues

Talkback and comments are most welcome

5 Reasons to Consult Your SysAdmin for New Systems

A lot of organizations isolate system administrators from new system implementations, led by the premise that their admin teams need to focus on maintenance, and that they may not bring benefit to the implementation, especially when consultants are engaged to implement the entire new system.
But always bear in mind that system admins have very specific insight that any project manager will find useful.

Here are the 5 reasons why organizations should always include their system admins in all phases of system implementation:
  1. SysAdmins know the infrastructure and the interactions between systems - every corporate IT infrastructure is a complex set of systems, firewalls, security rules and networking connections. The SysAdmin can provide invaluable information about what the new system will communicate with, under which conditions and by which rules - questions that need to be properly answered in any implementation.
  2. SysAdmins know the utilized capacities of current systems - introducing a new system is never self-sufficient. The new system will add load to the switching infrastructure and firewalls, and can require additional licenses for monitoring systems and possibly database servers. All these prerequisites need to be addressed in a timely manner, so the implementation does not grind to a halt at the last mile because there are no available ports.
  3. SysAdmins can assist in evaluating required capacity - Based on what is used in the current network, SysAdmins can provide very relevant observations on whether the offered hardware is appropriate in terms of processing power, memory and disk capacity.
  4. SysAdmins can provide fresh insight into possible risks in implementation - While risk analysis is part of the preparation for implementation, SysAdmins can provide good input on possible risks - they know the users and usage patterns, the client systems and the entire environment.
  5. SysAdmins need to be in the loop for the new element - The SysAdmins need to know and understand the new element in the infrastructure, so they can prepare to welcome it - prepare capacity on related systems, read about the product, and properly organize day-to-day maintenance tasks for the new system.
Related posts

Talkback and comments are most welcome

Essential Management Semantics - Responsible vs Accountable

I've had a discussion at the office about who is responsible for a certain activity. And as expected, the junior colleagues got into a discussion of who is more and who is less responsible for the activity. The Information Technology Infrastructure Library (ITIL) defines two distinct roles:
  • Responsible and
  • Accountable

If you open Webster's dictionary (www.websters.com) and look up the adjective "responsible", you get the following description: answerable or accountable, as for something within one's power, control, or management.
If you do the same for "accountable", here is what you get: subject to the obligation to report, explain, or justify something; responsible; answerable.

It is common sense to assume that "accountable" and "responsible" are synonyms. But in both Management and IT their meanings differ slightly, and that makes all the difference:

Accountable is the PERSON (singular) who answers for the entire set of results of a performed activity or process.
Responsible are the PERSON or PERSONS (singular or plural) who answer for the quality of a subset of tasks performed within an activity or process.

So, there can be many responsible persons for the proper performance of a process, but there should ALWAYS be only ONE person accountable for the entire process.

Bonus Question
Q: When something does not get done right, who gets blamed - the Accountable or the Responsible?
A: The Accountable has the task of identifying which Responsible is failing at his job and taking measures to fix the problem. In the long run however, if the problem is not fixed and the entire process fails, the Accountable will be called to answer.


Related posts

Talkback and comments are most welcome

iPhone Failed - Disaster Recovery Practical Insight

A lot of Disaster Recovery procedures are considered failed simply because they took longer than originally planned and documented. And a lot of these procedures take longer not because of poor equipment or incompetence. On the contrary, they take longer because the responsible people are focusing primarily on the effort to fix the problem. Here is a practical example:

On Tuesday my iPhone failed. And since its warranty is long gone, I decided to fix it myself. I finally got it fixed on Wednesday night.

In my zeal to repair it, I forgot the first rule of business continuity - recover functionality within an acceptable time frame. And for an iPhone, just as for any other mobile phone, the main functionality is TELEPHONY!!! I was unavailable for most of Tuesday and during parts of business hours on Wednesday.

In the end, the problem was solved, and my iPhone is working again. But then all the missed calls came raining down, and that kicked me back into reality and gave me a real perspective on what I needed to do: find a low-end replacement phone instead of meddling with low-level formats, firmware flashing and DFU modes. That way, I would have been contactable, and under much less pressure to quickly fix my iPhone.

In perspective, the same behavior can be seen in many organizations during IT disaster recovery. Disaster recovery is organized and coordinated by IT people - mostly very capable engineers. And yet, a large number of Disaster Recovery actions are delayed by these good engineers focusing primarily on fixing the engineering problem - not on fixing the business problem.

In a Disaster Recovery situation, the timer of recovery is known as the Recovery Time Objective (RTO). That is the time interval, starting from the moment of disaster, within which operations must be recovered to limited but essential functionality.

A good DR manager - regardless of his position and education - does his work with a stopwatch. The time he can allow the engineers to try to fix the problem does not have a formal name, so let's call it Fixing Time. It is the time difference between the RTO and the tested time required to activate the Disaster Recovery systems.
Once this Fixing Time passes, Disaster Recovery preparations must start. If the problem gets fixed before the DR system activation completes, all is well. If not, the RTO has still been met. Oh, and the engineers can relax from the urgency pressure and work on fixing the original problem for as long as it takes.
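In other words, as a quick Python sketch of the arithmetic (not a formal definition):

```python
# A sketch of the timing rule above: the window left for ad-hoc fixing is the
# difference between the RTO and the tested DR activation time.
from datetime import timedelta

def fixing_time(rto: timedelta, dr_activation: timedelta) -> timedelta:
    """How long engineers may keep trying to fix the original problem."""
    return max(rto - dr_activation, timedelta(0))

# Using the numbers from the next paragraph: a 2-hour RTO and 30 minutes to
# obtain a replacement phone leave 1.5 hours of Fixing Time.
print(fixing_time(timedelta(hours=2), timedelta(minutes=30)))  # 1:30:00
```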

Back to my iPhone example - what was my timing? A phone's RTO should be the recharge time - 2 hours. Getting a replacement phone is a walk to the store to buy the cheapest prepaid model, or borrowing a spare from a friend - 30 minutes. So I needed to keep my cool and try to fix the problem for only 1.5 hours before looking for an alternative. After that, I could have spent a week on the iPhone - no pressure to fix it fast.

Related posts

3 Rules to Prevent Backup Headaches
Business Continuity Analysis - Communication During Power Failure
Example Business Continuity Plan for Brick&Mortar Business
Business Continuity Plan for Blogs
Example Business Continuity Plan For Online Business

Talkback and comments are most welcome

Cloud Computing - Premature murder of the datacenter

Last week Amazon announced its new cloud computing service - Amazon's Elastic Block Store (EBS). It's a remote storage service with an excellent storage/cost ratio, which is even advertised as a replacement for large enterprise storage systems. Naturally, the ever controversy-seeking journalists hurried to declare the time of death of the enterprise data center, including this view:


Though most businesses are quite comfortable in using external utility services for electricity, water, and Internet access — and we even use banks to hold and pool our money with others "off site" — we are still largely unready to move computing off-premises, no matter what the advantages.

It is correct that certain elements are used as external utilities, but let's compare the services from a realistic point of view:
• Electricity as a service - because everyone is entirely dependent on electricity, the grid itself is designed to be resilient, to have fast failover times, to survive major catastrophic events at power plants or within the grid, and even to re-route additional supplies from other countries if need be - at horrible cost, but it does work! Oh, and for the simple case of a grid glitch, we'll spend $500 on a UPS and another $5000 on a diesel generator and we're all set!
• Data storage as a service - for data storage services, information is needed here and now - exactly like electricity. If we are to outsource our cloud information storage to a provider, that may be well and good as long as it works. However, in the information security world there are three key concepts, and our cloud data storage must guarantee commensurate levels of:
  • Confidentiality - in cloud computing, location is an ambiguous concept. Data will exist on different storage elements at different physical locations, and will traverse vast physical networks that are neither related to nor in any way responsible to the customer, as long as it gets there. Who will guarantee that confidentiality is maintained? Oh, and I forgot - you ACCESS the data via the Internet. Whenever a confidentiality breach does occur, it can always be blamed on your Internet connectivity and a breach of security at the access provider, not the storage service provider
  • Integrity - will probably be maintained, since there are very simple ways of keeping a small piece of control information (a checksum) with each set of data and comparing it on every read (a minimal sketch follows this list) - as long as fragments don't get lost, in which case we have a problem of...
  • Availability - in cloud computing, information is everywhere and gets collected and presented at the user's request. If for any reason this data cannot be reconstructed and verified, it is lost. And again, access to the information is through the Internet - which is not a service with guaranteed availability, since it depends on an international mesh network controlled by a multitude of independent entities. Unless you spend top dollar on dedicated data links, nobody will sign a strong SLA for Internet access - it's impossible to achieve.
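
To illustrate the integrity point above, here is a minimal sketch of the kind of control information one could keep with each piece of data pushed to a cloud store - a SHA-256 digest recorded at upload time and re-checked at download time. The upload/download helpers and the dictionary standing in for the remote store are assumptions for illustration only, not any provider's actual API.

    # Minimal integrity-check sketch: keep a SHA-256 digest for every object
    # written to the cloud store, and verify it on every read.
    import hashlib

    def digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def upload(store, key: str, data: bytes) -> str:
        store[key] = data        # placeholder for a real storage API call
        return digest(data)      # control information, kept by the customer

    def download(store, key: str, expected: str) -> bytes:
        data = store[key]        # placeholder for a real storage API call
        if digest(data) != expected:
            raise ValueError("Integrity check failed for " + key)
        return data

    # Usage, with a plain dict standing in for the remote store
    store = {}
    d = upload(store, "report.pdf", b"...file contents...")
    assert download(store, "report.pdf", d) == b"...file contents..."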

But why not keep a local backup, just like the UPS? Of course we can - it's known as an enterprise data center!

While strides are being made in the right direction, cloud computing's current level of usability is restricted by the "best effort" nature of the entire network on all sides. So the users of cloud computing are the ones who find it acceptable to:

• have delays in access to information
• have some data lost, and
• accept information leakage, because it would not make a significant impact on them.

      In the meantime, the enterprise data centers are still humming strong

      Related posts

      Datacenter Physical Security Blueprint

      3 Rules to Prevent Backup Headaches

      Talkback and comments are most welcome

      Fedora Servers Compromised

      According to this announcement from yesterday, Fedora servers were compromised.

      Here is a scary part of the announcement:

      One of the compromised Fedora servers was a system used for signing
      Fedora packages
That particular server had very little to do with the Internet, and should have been properly isolated - even on a completely separate network from the Internet-accessible servers.

So, readers should be careful when downloading and installing the current Fedora distro and packages. I would wait for the next official release.

This event goes to show that large companies, regardless of industry, can make poor security choices. And because large, high-profile companies are a great publicity target, these poor choices are easily found by hackers.

      Anyway, respect to RedHat for the announcement. A lot of companies will simply sweep such an event under the rug.

      Related posts
      Portrait of Hackers

      Talkback and comments are most welcome

      Competition Results - Computer Forensic Investigation

      The Computer Forensic Investigation Competition is closed, and here are the results

What was there to be found (a small file-signature sketch for spotting the renamed files follows this list):

• A Tshark sniffer - part of the Wireshark suite - in /moodle/enrol/paypal/db
• The NetCat tool for backdoor creation, renamed as MyTool.exe - in /moodle/auth/ldap
• An MP3 of Sergio Mendes & Brasil 66 - Mas Que Nada, renamed as an HTML document - in /moodle/auth/imap
• A TrueCrypt rescue disk ISO, renamed as MyDoc.doc - in /moodle/lib/geoip/Documents/
• The OSSTMM Penetration Testing Methodology with penetration details, in the deleted file osstmm.en.2.1.pdf - in /moodle/enrol
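
As promised above, here is a minimal sketch of how renamed files like these can be flagged automatically: compare a file's leading magic bytes against the extension it claims. The signature table is deliberately tiny and purely illustrative - real forensic tools (the file utility, Sleuthkit and friends) do this far more thoroughly.

    # Sketch: flag files whose magic bytes do not match their extension.
    import os

    SIGNATURES = {
        b"\x89PNG\r\n\x1a\n": ".png",
        b"%PDF": ".pdf",
        b"MZ": ".exe",      # Windows executables, e.g. a renamed nc.exe
        b"ID3": ".mp3",     # MP3 files carrying an ID3 tag
    }

    def flag_mismatches(root):
        """Yield (path, expected_ext) where the header suggests a different
        file type than the extension claims."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    head = f.read(16)
                for magic, ext in SIGNATURES.items():
                    if head.startswith(magic) and not name.lower().endswith(ext):
                        yield path, ext

    # Example: for path, ext in flag_mismatches("/mnt/image/moodle"): print(path, ext)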

Finding the above was sufficient to win the competition. Alternatively, instead of the OSSTMM you could find the two items below:

• A decoy Metasploit developer's guide PDF in /moodle/lib/geoip/Documents - actually, that document has nothing to do with the direct hacking unless you also discover the
• Remnants of a deleted Metasploit framework in /moodle/lib/geoip/Documents

      Who did the investigation (in chronological order of reporting the findings - earliest first)

      • Lawrence Woodman - Found 4 incriminating pieces of evidence. Missed the real penetration tutorial and focused on the dummy - Metasploit.
      • Tareq Saade - Found 4 incriminating pieces of evidence. Missed the real penetration tutorial and focused on the dummy - Metasploit.
• Bobby Bradshaw - Found 3 incriminating pieces of evidence. Missed both the real and the dummy penetration testing documents (OSSTMM and Metasploit) and missed the TrueCrypt rescue CD ISO
• Daniele Murrau - Found all incriminating evidence. The utilized toolset is Autopsy, as part of the Helix distribution
• Lesky D.S. Anatias - Found all incriminating evidence. The utilized toolset is PyFlag and Sleuthkit

Other participants - did not qualify for the final review because they did not send details of their methodology or findings (no particular order)

• Phil (no last name) - reported finding 2 pieces of evidence, but did not send the methodology used or details of the findings
• snizzsnuzzlr (obvious nickname) - reported finding 5 pieces of evidence, but did not send the methodology used or details of the findings
• Fender Bender (obvious nickname) - reported finding 3 pieces of evidence, but did not send the methodology used or details of the findings
• Sniffer (obvious nickname) - reported finding 2 pieces of evidence, but did not send the methodology used or details of the findings


      And the winner is - Daniele Murrau

      Here are his conclusions and methodology as a downloadable PDF

We are also giving two honorable mentions

• For speed - Lawrence Woodman, who produced a nearly complete analysis in a tremendously short time, but most probably missed the OSSTMM and the Metasploit remnants because he was in a hurry
      • For thoroughness - Lesky D.S. Anatias, who discovered ALL evidence, including the metasploit remnants

      Related posts
      Competition - Computer Forensic Investigation
      Tutorial - Computer Forensics Evidence Collection
      Tutorial - Computer Forensics Process for Beginners

Talkback and comments are most welcome
