Hardware Security Module for Dummies

Following my publication of the design for Personal Data Protection in a database, I have received emails asking me to elaborate on the proper protection of secret keys in the infrastructure.



I'll describe it through the following example:
You want to secure some information, so you encrypt it. The encrypted data is stored on some computer-readable/writable media. For the entire exercise to be usably fast, the actual encryption and decryption is done by a computer program residing on a general-purpose computer. This program needs a decryption key in order to recover the data into a usable form.
If you need the encrypted data only for archival purposes and access it only very infrequently, you can set up a situation where an isolated (non-networked) computer system in a secure room does the encryption and decryption, and keep the key in a separate vault in a sealed tamper-evident envelope. When the time for decryption comes, an authorized person opens the vault, breaks the seal on the envelope and types the key into the decryption program.

But in a more realistic situation, the encrypted data will be needed on a daily basis, so it will probably be kept inside a database or a file on the computer. This computer will need to be networked, and some form of server application will be responsible for decrypting and presenting the data to the user. In this scenario, having the decryption key in a vault serves little purpose, since someone would need to type it in several times per hour. So the solution would be to keep the key somewhere inside the computer. Here's the catch: any secret kept within a computer is actually very insecure. It is kept in a file on the file system, or in RAM. Both locations are accessible at least to all administrators, and can easily become the target of a hacker intrusion. Encrypting the key does not improve things, since again there is a decryption key which is available in cleartext. The image below presents a computer system in which the secret key is stored in RAM or on the HDD. Both are fully accessible by the OS and can be dumped and analyzed, and the secret key retrieved.





What is needed is a way to keep the key in a safe location, yet obtain its decryption services at the speeds at which the computer operates. In other words, we need a tamper-evident and secure vault inside the computer. This is the role of the Hardware Security Module (HSM). The HSM is a hardware device which protects its contents and never reveals them in unencrypted form to the host computer; the host computer cannot access or address any of the HSM's storage memory. If any attempt to tamper with the HSM is detected, it can be logged, or the HSM can destroy its secure contents, depending on the security level of the HSM.
The HSM provides services through an API to generate and import secret keys and to encrypt/decrypt data with those keys. The image below presents a computer system in which the secret key is stored in the protected HSM memory. The OS can request the decryption of some data, but the secret key is never directly accessible.
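To make the difference concrete, here is a minimal Python sketch. The first half shows the insecure software-only approach (a real Fernet key from the cryptography package, living in process memory); the second half shows the HSM interaction pattern through a hypothetical HsmClient wrapper, since the exact API depends on the vendor (in practice it would be PKCS#11, JCE or Microsoft CNG):

    from cryptography.fernet import Fernet

    # --- Software-only approach: the key sits right here in RAM (and often on disk) ---
    software_key = Fernet.generate_key()      # anyone who can dump this process owns the data
    cipher = Fernet(software_key)
    token = cipher.encrypt(b"patient record 42")
    print(cipher.decrypt(token))

    # --- HSM approach (illustrative only; HsmClient is a hypothetical wrapper) ---
    # hsm = HsmClient(slot=0, pin="1234")           # authenticate to the device
    # key_handle = hsm.find_key(label="db-master")  # we get an opaque handle, never the key bytes
    # token = hsm.encrypt(key_handle, b"patient record 42")
    # plaintext = hsm.decrypt(key_handle, token)    # decryption happens inside the HSM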



HSM modules come in the form of a PCI card or an external device that connects via a USB, SCSI or RS-232 interface.
Their physical tamper resistance is rated as a FIPS 140-2 validation level, from 1 to 4:

  • Level 1 - Imposes very few requirements. In general, all components must be "production-grade" and various obvious kinds of insecurity must be absent.
  • Level 2 - Imposes the Level 1 requirements and adds requirements for physical tamper-evidence and role-based authentication.
  • Level 3 - Imposes the Level 2 requirements and adds requirements for physical tamper-resistance, identity-based authentication, and a physical or logical separation between the interfaces by which "critical security parameters" enter and leave the module and its other interfaces.
  • Level 4 - Imposes the Level 3 requirements but adds more stringent physical security requirements as well as robustness against environmental attacks.
In the modern world of information security, one should choose an HSM validated to at least FIPS 140-2 Level 3. Also, don't just take the vendor's word that their HSM is validated at a certain level. NIST maintains validation lists for all of its cryptographic standards testing programs (past and present). All of these lists are updated as new modules/implementations receive validation certificates from NIST and CSE.


Related posts
Personal Data Protection - Anonymizing John Doe


Further reading
FIPS 140-2
NIST Validation lists


Talk back and comments are most welcome

Application security - too much functionality brings problems

In the past 5 days I came across 3 examples which proved that too much functional complexity can backfire in terms of business competitiveness and security:

  • The car example - I read an AutoBild 100,000 km test of the BMW 7 Series. Their biggest complaint was the iDrive system (no relation to Apple). BMW's idea was that a single computer interface would replace the arrays of buttons and dials on the central console of the dashboard (from the radio to the suspension settings). The initial version of the iDrive system was so complex that it became a nightmare for the driver to use. The end result: a very expensive car that is difficult to use, and sometimes even dangerous, since the driver is focusing on the iDrive instead of the road.
  • The phone example - My 2.5-year-old son loves to play with my cell phones. While he is usually interested in fiddling with a phone for around 2-3 days before getting bored, he is still constantly playing with the iPhone. What is even more fascinating, he is quite adept at activating and deactivating the iPhone's features without any help from an adult beyond the initial demonstration. Although not an actual user of the iPhone, he knows how to use most of its features.
  • The software example - I was doing a review of an application design for an engineering company (Engineering ACME). This application has a feature which enables the user and the manager to follow what actions are performed, and to revoke any of their commands even out of sequence, provided they are not interdependent. While possibly a good idea in the long run, the implementers placed the invocation of this action-history feature in many places, including one wrong place. With this wrong place, they caused a situation in which the action history can be invoked even if the user is logged off, and a command for cancellation or re-execution can be issued. While no harm can be done at that moment, since there are no credentials to complete the change, the implemented action-history mechanism cached the issued command and completed it upon a successful log-on of the user whose action is to be changed.
These three examples lead to the following conclusions:
  1. It is rarely good to choose a solution based only on the function set it offers.
  2. Bad implementation of functionality can turn a very expensive, high-quality product into one that people are afraid to use.
  3. Bad implementation or too much functionality can make a product underperform, or create a situation where people are afraid to use it, and thus never use the features they paid for.
  4. Bad or too hasty implementation of functionality WILL produce security issues, which can even pass undetected into production.
  5. Good implementation will enable fast adoption of any product, no matter how inexperienced the user is.
  6. Simplicity of the user interface does not mean it is based on a simple or inflexible engine.

Users, especially corporate users, buy on the number of functions, not on the quality of implementation. But a complex, poor implementation WILL backfire.

It is only a matter of time before it does. The later it does, the harder it will be for the software developer to fix the problems, if they can be fixed at all!

The iPhone and the iPhone community are an excellent tutorial for system architects and software engineers: have a strong core, and develop functions one at a time. Don't just overload the system with functionality.


Related posts
Security risks and measures in software development
Security challenges in software development

Talk back and comments are most welcome

Corporate Skype Wishlist

I have already blogged about the things that make Skype a poor choice for a corporate environment. But, facing reality, the penetration of Skype among home users is excellent, and a whole lot of people are quite familiar with its interface and usage. So, if there is a way to make Skype more corporation-friendly, it becomes a very easy tool for employees to adopt.
Now there is talk that Skype may be sold.

Without knowing what the business model of the possible new owner will be, here is a wish list that would make Skype the killer of all corporate IM applications.

  • Enable autonomous functionality - Effectively, the organization should be able to run Skype in an autonomous mode, without contacting outside Skype servers. This would probably require integration with Active Directory or some other directory service for user authentication.
  • Enable administrator-controlled assignment of SuperNodes and RoutingNodes - Each Skype program can become (by its own decision) a SuperNode or RoutingNode and assist in communication between other Skype systems. This is highly undesirable in a corporate network infrastructure, where every bit on the network should be accounted for. Manual assignment or disabling of these roles should be available to the administrator.
  • Create the possibility to define which users can access which functions - Every service within the Skype service set should be subject to configurable enabling or disabling. This includes video, voice, file transfer, chat and anything else they might think of.
  • Create an Audit Log - Create a central audit log of all configuration changes, logon and logoff events, logon errors and chat conversations
  • Throw out the automatic logon check box and disable password saving - NOBODY should log on automatically. In a corporate environment, all systems should prevent forgetful employees from logging on automatically to anything - including the communication package.
  • Create controlled access to Internet Skype - Create the ability to establish communication and make/receive calls to users of the standard Skype, but through a definable and controlled gateway, and only for users for whom this function has been approved and enabled by an administrator.

To the new owners of Skype: if need be, set a price for this product, but please consider the currently untapped possibilities of the world where Microsoft Live Communications Server (LCS) rules.

Related posts
Is Skype a good Corporate Tool?

Talk back and comments are most welcome

Rebranding a free E-mail domain - strategic blunder

I am using several free e-mail services, mainly because of insufficient quotas. But for more than 7 years, my primary e-mail address hasn't changed.

It is a free service hosted by a local telco provider, with only a 100 MB quota and relatively poor spam protection. I have kept it simply because a huge number of my contacts and registrations on different sites on the Internet are bound to this address. It is like keeping a mobile phone number with an expensive mobile telephony provider simply because it is the number that everyone knows.

Yesterday I received a memo from the telco provider that it will go through a rebranding process. This rebranding will include a change of the domain name of the free e-mail service. All incoming emails to the original domain will still be received, but all outgoing e-mails sent from the webmail interface will go out as sender@new-domain.com. And all of this will happen in 2 weeks!

The idea of the telco provider is that this change in domain name will immediately raise the profile of the new brand.
Naturally, you give something away for free in order to gain something else (customers of another service, media attention, brand recognition). In a free service, there are 2 types of customers: the casual users, who just registered because it is free and comprise 80% of all users, and the power users, who actually use the service as their primary service and comprise the remaining 20%. It is the power users who frequently circulate e-mails carrying a certain brand, thus promoting the brand.
It is the power users the operator wants, not the freebie hunters.

But here's the rub: let's analyze what a power user of an e-mail service does:

  1. He/She is subscribed to automatic services or mailing lists. Such automatic services only validate a command if it is received from the originally registered e-mail address.
  2. The partners and friends of the power user usually have e-mail rules in their mail clients which manage e-mail based on the sender address.
  3. It is no small effort to inform all the people with whom one communicates of the e-mail domain rebranding. Such information will again be sent through e-mail, and may or may not be noticed and read in time by the recipient, leading to unread or lost e-mail.
  4. It is an even larger effort to try to find all the services to which one is registered and re-register the new e-mail address. For some of these services the user has forgotten the passwords; for others, the process of re-registering is horribly complicated in order to protect the members - especially for organizations like chambers of commerce or guilds.

So here is what I expect to happen:

  1. The casual 80% will most likely ignore the entire event, or adopt it without too much fuss, but they will use it as much as the previous one - almost not at all.
  2. The 20% of power users will accept the inevitable, but will reconsider which e-mail service to present to the world and use in the future. As I said before, the service offered by the provider is a joke compared to Yahoo or Gmail.

Now, I understand that this is a free service and, as such, the operator has the right to rebrand it. However, they have created a scenario similar to a mobile telephony provider telling you that you MUST change your phone number because of a rebranding. Faced with that fact, a lot of users will flock to a better/cheaper service. The churn of power users will be significant, to say the least.

I do not have an insight into the product and brand management strategy of the telco operator, but it looks like a very poor strategic decision.

Talk back and comments are most welcome

DHCP Security - The most overlooked service on the network

DHCP is a service that a lot of you use, whether you are aware of it or not. It is the service that assigns an IP address to hosts on the network when they are set up for automatic configuration. This service is extremely common on large corporate networks, but with the advent of Wi-Fi in SoHo networks, DHCP is becoming more and more present in those environments as well.

Short description of DHCP
The Dynamic Host Configuration Protocol (DHCP) automates the assignment of IP addresses, subnet masks, the default gateway, and other IP parameters. The DHCP exchange happens before the client has an IP address, so the only distinguishable identifier of the client computer at this point is the MAC address of its network interface.

When a DHCP client connects to a network, it sends a broadcast query trying to discover DHCP servers. Upon receiving an offer from a DHCP server, the client sends a broadcast requesting the necessary configuration from that server.
Upon receipt of a valid request, the server assigns the computer an IP address and other configuration parameters such as the subnet mask and the default gateway, and sends these parameters to the requesting client. The DHCP server is configured to manage and lease a pool of IP addresses within a specific address range, according to the routing settings of the network and the number of clients on the LAN segment.

The assigned parameters are 'leased' from the server, and when the lease expires the client must release the assigned IP address and parameters, effectively unconfiguring its network interface. To prevent this, the client tries to renew the lease, usually starting the renewal requests at half of the lease period.


Vulnerabilities that can be exploited in a DHCP service

  1. Rogue DHCP server - a very dangerous attack and a very easy one to set up. It involves creating your own DHCP server and connecting it to the network, with the intention of sending your parameters to the clients. The attacks a rogue DHCP server can mount against clients range from a simple denial of service (issuing non-routable IP addresses or wrong DNS information) to the very subtle issuing of a rogue DNS server. With this second attack, the attacker sets up the clients to use his DNS server instead of the standard corporate one. The rogue DNS server can then be configured to direct users to fake copies of some sites, for the purpose of credential collection.
  2. DHCP denial of service - a simple attack to perform, but not too critical if used by itself. It involves placing a specially configured attack DHCP client which will request many DHCP leases with spoofed MAC addresses, effectively 'draining' the available pool of IP addresses from the DHCP server. If this happens, normal clients will not be able to obtain an IP address and use the network. This attack is usually combined with the previous one, in order to prevent the regular DHCP server from responding to the requests.
  3. DHCP routing attack - if a rogue or compromised DHCP server returns the IP address of the hacker's machine as the default gateway, then all traffic from the local network will pass through this machine and can be subjected to traffic sniffing and reconstruction of TCP sessions, revealing user names, passwords, personal information etc. And it is very easy to set up a computer as a NAT router that forwards all communication to the regular gateway, so no one will actually see any change on the network.
  4. Compromise of the corporate DHCP server - the most difficult attack to perform and the most dangerous. It is quite difficult to achieve since the hacker needs to compromise an actual corporate server, which is often well protected by hardening and IDS systems. Once penetrated, the compromised DHCP server offers the entire set of attacks on clients described above, with the added benefit that this attack is very difficult to identify. There are no rogue DHCP servers on the network that the Net Admin can scan for, and at first glance business goes on as usual.

Securing the DHCP service

Securing the DHCP service is difficult because it is designed to operate at a very low level, and there are only a few security controls that can be implemented for it:
  • Manually set the DNS server address on each client to a trusted DNS server - DNS servers rarely change, so this is an excellent protection against a rogue DNS server. On the other hand, it is a relatively cumbersome process.
  • Harden the operating system and procedures of the DHCP server and the DHCP relay agents - implement all available security patches; change all default passwords and maintain a rule requiring complex and frequently changed passwords (this includes SNMP). Disable all unnecessary services and user names.
  • Implement procedural rules that ban the connection of outsider laptops to the corporate network - write the procedures and scan for rogue PCs on the network at frequent intervals; implement regular unannounced scanning for rogue DHCP servers on your network (a minimal detection sketch follows after this list).
  • For Wi-Fi networks, use WPA2 encryption and perform patches and updates on the access points and routers.
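As an illustration of such a scan, here is a minimal sketch of a rogue DHCP server check, assuming Python with the scapy package and root privileges on a machine connected to the segment being tested. It broadcasts a single DHCPDISCOVER and lists every server that answers; any offer coming from an address other than your legitimate DHCP server deserves investigation:

    from scapy.all import Ether, IP, UDP, BOOTP, DHCP, srp, conf, get_if_hwaddr

    IFACE = "eth0"                      # adjust to the interface on the tested segment
    conf.checkIPaddr = False            # offers come from the server's IP, not the broadcast address

    hw = get_if_hwaddr(IFACE)
    discover = (
        Ether(src=hw, dst="ff:ff:ff:ff:ff:ff")
        / IP(src="0.0.0.0", dst="255.255.255.255")
        / UDP(sport=68, dport=67)
        / BOOTP(chaddr=bytes.fromhex(hw.replace(":", "")))
        / DHCP(options=[("message-type", "discover"), "end"])
    )

    # multi=True keeps collecting answers until the timeout, so several servers can show up
    answered, _ = srp(discover, iface=IFACE, multi=True, timeout=5, verbose=False)
    for _, offer in answered:
        print("DHCP offer from", offer[IP].src, "options:", offer[DHCP].options)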

Related posts
5 Rules to Home Wi-Fi Security

Further reading

DHCP service description on Wikipedia

Talk back and comments are most welcome

Reminder - Information Security and Strategy Carnival

Reminder - only 4 more days to submit your blog post for the Information Security and Strategy Carnival. Please submit posts on the following topics:

  • information strategy
  • information security
  • network security
  • database security
  • data security
  • vulnerability analysis
  • penetration testing

The carnival will be published on the 1st day of every month. We will accept only original texts which present a strategic opinion, a review of an event or product, or a HowTo on a relevant topic.
Please send submissions by the 25th of each month by e-mail to:
shortinfosec _at_ gmail dot com
or submit them through the Blog Carnival Web Portal http://blogcarnival.com/bc/submit_3975.html

The Cost of Datacenter Physical Security Blueprint

I have received a couple of e-mails about the Datacenter Physical Security Blueprint with comments that my blueprint is too movie-like and way too costly to implement.
So I did a little shopping around and requested budget prices for every element of my Blueprint (budget prices are usually higher than purchase prices, since they are non-obligating quotes used for budget estimation).
Here is the math. All prices are in US dollars.

Security equipment

  • 9 CCTV cameras with infrared sensors - 130$ a piece = 1,170$
  • 8 Glass break sensors - 45$ a piece = 360$
  • 8 Motion Sensors - 30$ a piece = 240$
  • 4 KeyCard readers (with combined electronic/mechanical lock and open/closed status sensor) - 150$ a piece = 600$
  • 2 KeyCard readers with Keypad (with combined electronic/mechanical lock and open/closed status sensor) - 160$ a piece = 320$
  • 1 Biometric reader = 700$
  • 1 Alarm controller = 600$
  • 1 Access/Keycard reader controller = 400$
  • 1 CCTV recorder = 4,000$
  • 2 CCTV monitors (one on reception and one in SysMonitoring) - 500$ a piece = 1,000$
  • Cabling and infrastructure = 1,000$
-------------------------------------------------------
  • Security equipment Total = 10,390$
Dead man door

Option 1 - A full turnkey solution, with an internal scale for weighing the person and bulletproof doors with glass windows capable of stopping 7.62 mm ammunition (think AK-47) = 12,000$

Option 2 - An integrator-made solution, with the same capabilities as Option 1 but taking more space = 8,000$

-------------------------------------------------------
  • Grand Total = from 18,390$ to 22,390$
Last time I checked, that was the price of a Ford Fusion (US vehicle) or a Honda Civic (European vehicle).

Related posts

Datacenter Physical Security Blueprint

Talk back and comments are most welcome

Datacenter Physical Security Blueprint

A very important aspect of information security is physical security. A significant number of security incidents are found to have exploited some vulnerability in physical security.

So, here is a set of rules to create a blueprint of physical security for a company's IT department and data center.
  1. The system room must not have windows. Ideally, it should be in the center of the building.

  2. All equipment that is not used must be stored in dedicated storage space, away from production environment

  3. All high security spaces should be monitored by CCTV cameras.
  4. Access control zones must be implemented, to create a security barrier as well as provide a log of access activities. These zones are created by doors opened with electronic key cards or multi-factor authentication.
  5. All windows should be made of fully tempered glass and equipped with a glass-break sensor connected to a central alarm system.
  6. All spaces that don't have 24/7 access should have motion sensors connected to the central alarm system.
  7. The design of the environment should enable technical service personnel to operate with minimal risk of unauthorized access to data.
  8. All alarm events and CCTV control should be under maximum security but should NOT be accessible by IT personnel
  9. Paper, optical and magnetic data carriers should be handled in a controlled environment, and properly destroyed prior to discarding
  10. High security environment should always implement multi-factor authentication.

The following image presents a concept for an IT department and System room environment that follows the presented set of rules:

The set-up of the environment is the following:

The reception area is the only way to access the entire floor, and everyone entering this space is recorded by the CCTV camera. Access to the rest of the floor is restricted by a key card controlled door.

The Communication Room is also in the reception area, and it is accessible through a key card and PIN controlled door. It houses the access panels where the communication providers (telecoms, Internet, VPN etc.) terminate the purchased links. This is the last point that a representative of the telco providers can access to configure connectivity. The comm room has to be opened by an authorized system administrator, so the telco provider's representative is always escorted by an authorized person.

All the corridors in the space around the data-room are under CCTV surveillance

All offices have windows made of tempered glass that cannot be opened and are equipped with motion sensors which activate after 7 PM.

The support center (which is manned 24/7), the toilet and the equipment storage room are the only rooms without motion sensors, since these spaces can be used around the clock and there is no point in placing motion sensors there.

All documentation photocopying and destruction is performed in a dedicated room equipped with proper devices (shredder, degausser).

A dedicated storage space, accessed through a key card controlled door and also monitored by CCTV, is used to store all unused equipment.

The data-room is central to the floor and has strengthened walls (the blue walls in the blueprint). The data-room is divided into two segments:

  • Pre-system space - this space is accessible via a dual key card door, which opens only when two persons use their key cards simultaneously. The Pre-system space contains the supporting infrastructure, which is placed outside of the system space to minimize risks of battery or coolant leaks, and to allow service personnel to access and service this infrastructure without having access to the actual servers.

  • System space - this space is accessible via the dead-man door, which is actually a very small corridor (it fits only one person at a time) with a door at each end. If one of the doors is open, the other is automatically locked. In order to pass through the dead-man door, one must pass multi-factor authentication: present a key card (something he/she has), type in the corresponding PIN (something he/she knows), and, after entering the dead-man space, be weighed to verify against the stored weight and pass a biometric verification - retina or fingerprint (something he/she is).

The system space is under constant CCTV surveillance, and it also contains a separate small electronically locked space where the security controllers reside, to isolate these controllers from the SysAdmins.

Download the full resolution blueprint HERE

Related posts

5 Rules to Home Wi-Fi Security

Talk back and comments are most welcome

Tutorial: Making a Web Server

I was contacted by several readers of my previous post, Creating Your Own Web Server, with comments that it lacks an actual tutorial on how to create the web server.
Again, I am very wary of creating a tutorial that would favor only one web server or platform, so I am including a generic checklist that covers the creation of any web server, regardless of platform.

STEP 0 - Find a computer that will become the web server. For exercise purposes, an older PC lying around or a virtual machine is more than sufficient.

STEP 1 - Install an operating system on the designated Web Server computer.

  1. Operating System Elements - Minimize the number of other services. If possible, avoid a GUI installation, and leave only the Web server service and the secure remote management service of your choice (NOT TELNET).
  2. Storage Volumes - I recommend creating a designated volume for web site hosting, separate from the system volume. This helps if you need to set quotas and permissions, and when performing backups.
  3. IP address - Set a fixed IP address on a LAN network interface. This is the primary IP address that you will use to contact the Web server once it is installed. If you don't have any LAN adapters in the PC, don't despair! There is always the loopback interface, which always has the address 127.0.0.1. Make sure that the interface on which you set the IP address is active (up); the simplest way to do this is to connect it to a hub port. The loopback adapter is always up, so you don't need to bother with it.
STEP 2 - Install your favorite web server software (if not already added in the OS install).
  1. For Windows Server, you can choose to activate the built-in Internet Information Server (IIS), or install another Web server for which a Windows version exists.
  2. For the Linux/Unix flavours, you can't go with IIS, but the Apache and lighttpd web servers are always available, among others.
    NOTE: There is always the option to compile the web server from source code, with appropriate tuning of the executable. For your first few web servers, avoid this option and go with a precompiled and packaged binary (RPM package for CentOS/RedHat/WhiteBox and DEB for Ubuntu/Debian)
  3. Add support for the appropriate server-side scripting languages to the web server. Without this, the web server will only be able to serve static HTML, and will not be able to process PHP, Ruby, ASP etc. For Windows, ASP support is built into IIS. For all other web servers on any platform, the appropriate binaries (packages) need to be installed for scripting support.
    NOTE: In case the installer of these binaries isn't capable of integrating the support into the web server configuration, you have to add it manually into the Web server config files

STEP 3 - On the web server, open a browser and type in the IP address you set on your PC, or just use 127.0.0.1. You should see a default page informing you that the server is up.
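If the server has no GUI or browser, the same check can be scripted; here is a minimal sketch using Python's standard library (assuming the server listens on port 80 of 127.0.0.1):

    from urllib.request import urlopen

    # Fetch the default page and show the status code, the Server header and the first bytes
    with urlopen("http://127.0.0.1/", timeout=5) as response:
        print(response.status, response.headers.get("Server"))
        print(response.read(200))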

STEP 4 - Set up your own content. This means placing the content in the appropriate directory, which is the 'root' of the web site. Each web server is installed with one default web site (usually called....drumroll please....'default').

The usual filesystem path for Internet Information Server is C:\inetpub\wwwroot, and for a Linux-based Apache it is /var/www/htdocs. If you are installing the AppServ package on Windows, the default path is C:\appserv\www\

And that's it, you have a bare-bones working Web server. Regrettably, describing further details of advanced set-up would take way too much time and space to cover all relevant topics, but you are welcome to contact me for individual questions and advice at shortinfosec _at_ gmail dot com.

Related posts
Creating Your Own Web Server
Web Site that is not Easy to hack - Part 2 HOWTO
Web Site that is not that easy to hack - Part 1 HOWTO

Talk back and comments are most welcome

5 Rules to Home Wi-Fi Security

The philosophy of security is to strike the delicate balance between cost of protection and usability. Making something very secure is very expensive, but making something very usable means that the bad guys can use it.
The same philosophy goes for a hacker attack - the cost of the attack should always be less than the value of the prize.

Here are 5 rules that maintain a very reasonable level of usefulness of a home Wi-Fi network, while increasing the cost of an attack to the hacker beyond the value of the prize.

  1. Always choose a non-default, non-broadcasting SSID - this will not stop a more capable attacker, but it will deter a good number of script kiddies. A good name is one which contains both letters and numbers and cannot be deduced from the personal info of the network's owner.
  2. Always set up the strongest possible encryption - Choosing the strongest available encryption is always a strong attack-mitigating factor. For home users with 802.11b/g LANs, at the moment that is WPA2 encryption.
  3. Maintain password complexity and change the password often - Always set a complex WPA2 passphrase, at least 8 characters long, containing at minimum letters and numbers, that cannot be deduced from the personal info of the network's owner (a small generation sketch follows below). Make a habit of changing it at least once every three months.
  4. Maintain the minimal possible range of the Wi-Fi signal - If the Wi-Fi device permits it, reduce the radio signal strength to the minimal useful level - if someone needs to stand on your front porch in order to hack into your Wi-Fi network, they will most probably go elsewhere.
  5. Treat the Wi-Fi network as hostile - Maintain an active personal firewall on all computers accessing this network, as if they were accessing an open hot spot. If not using the network, deactivate the Wi-Fi radio on the computers.
NOTE: The value of the prize is a very relative thing. While these rules are quite effective for a home network without any business or highly valuable data, they are not at all effective for securing a corporate Wi-Fi network. For such networks, the value of the prize is still by far greater than the costs and risks needed to overcome the obstacles presented in this post.
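As a quick illustration of rule 3, here is a minimal Python sketch that generates a random passphrase from letters and digits using the standard library's secrets module (the length and alphabet are just example choices):

    import secrets
    import string

    # 20 random characters from letters and digits - nothing an attacker can deduce from personal info
    alphabet = string.ascii_letters + string.digits
    passphrase = "".join(secrets.choice(alphabet) for _ in range(20))
    print(passphrase)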

Further reading
WPA and WPA2
Password Strength

Talk back and comments are most welcome

Creating Your Own Web Server

I got a question: What do I need to create my own web server?
At first it seemed like a very curious and redundant question, since web hosting is already quite mature and there is a wealth of both free and commercial web hosting services to choose from. But I am compelled to follow through on this topic, since my beginnings were at an ISP, and there are still a number of good reasons to have your own web server.

First, a short definition: in its simplest form, a web server is nothing more than a program running on a computer which accepts requests via the HTTP protocol and returns the requested web content (HTML, JavaScript, Flash, video, audio...) to the requesting browser for rendering. More often than not, part of this content is kept in a database.
So in essence, all you need to run a web server is a computer, an operating system, a web server program and a database server program.
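To underline how little the bare minimum is, here is a sketch of a toy web server using nothing but Python's standard library; it serves the files in the current directory and is suitable only for tinkering, not for production use:

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve the files in the current directory on http://127.0.0.1:8000/
    server = HTTPServer(("127.0.0.1", 8000), SimpleHTTPRequestHandler)
    server.serve_forever()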

This is where the simple things end, because now you need to ask yourself: what kind of computer, what operating system, what web server software, what database... But in order to answer these four questions, you need to ask more questions. Here are some of them, in no particular order:

  1. Do you want your server to survive a failed power supply without noticeable downtime?
  2. Do you want your server to survive a failed hard drive without noticeable downtime?
  3. Do you want to perform backup of the web content on the server?
  4. What type of web application will you run? (PHP, ASP, .Net, PERL, Ruby...)
  5. Do you want to use free or commercial web server software?
  6. Do you want to use free or commercial database server?
  7. Which database are you prepared to use and maintain (install, tune, manage, program for...)?
  8. Which operating system are you prepared to maintain (install, tune, manage)?
  9. Is the chosen database server supported on any of the chosen operating systems?
  10. Is the chosen web server supported on any of the chosen operating systems?

As you can probably see, all these questions in some way address the purpose of the web server being created. To answer the original question, here is a summary of technical requirements for a web server with different types of use:

  1. For tinkering - A web server used only for tinkering at home should be no more than free software installed on your PC.
  2. Beginner development - A web server used for beginner development should be a virtual machine or a small dedicated PC, with free or some commercial software, and some simple means of backup like a DVD-RW.
  3. In-house development - A web server used for in-house development should be a dedicated PC or a small server, with free or some commercial software, and some means of backup like a DVD-RW or a DAT/LTO tape drive.
  4. Service providing - An ISP web server should be a dedicated server, with redundant power supplies and hard drives, a professional backup system (robotic drive enclosures and a backup server) and appropriate software.

The web server I am currently using for testing and security review purposes is AppServ (http://www.appservnetwork.com/) running on my Windows XP laptop. It delivers an instantly functioning Apache web server with support for PHP, an accompanying full-blown MySQL server, and a simple but sufficient SQL server management interface (phpMyAdmin).

For advanced or commercial use, the list of software products is huge, but here is my shortlist, with appropriate links to software:

Talk back and comments are most welcome

Information Security and Strategy Carnival

I am proud to announce that the ShortInfoSec Blog will be hosting a regular carnival on the following topics:

  • information strategy
  • information security
  • network security
  • database security
  • data security
  • vulnerability analysis
  • penetration testing

The carnival will be published on the 1st day of every month. We will accept only original texts which present a strategic opinion, a review of an event or product, or a HowTo on a relevant topic.

Please send submissions by the 25th of each month by e-mail to:

shortinfosec _at_ gmail dot com

or submit them through the Blog Carnival Web Portal http://blogcarnival.com/bc/submit_3975.html

Security risks and measures in software development

Following up on my post about security challenges in software development, I would like to present the risks that arise from these challenges, as well as a short introduction to the preventive measures that mitigate such risks.

Product related risks

  1. Security flaws of the deliverable product – the most feared risk and usually the one with the most dire consequences. The product is THE principal source of reputation and income for the company. At the same time, the product is the tool that a customer uses to manage his information and data. A security flaw in the delivered product can result in loss of integrity, confidentiality or availability of the customer's information. Any one of these results would mean loss of a client, loss of reputation and even legal action against the development company.
  2. Security flaws of the maintenance and support methodology – This risk takes two forms:
    a) INSIDER FACTOR – a security breach at customer's premises by an employee of the software development company involved in the maintenance process.
    b) OUTSIDER FACTOR – a security breach by an outside attacker who gained access to the customer’s premises by compromising the network infrastructure of the software development company
    It is quite clear that the insider factor carries most of the risk weight here. It should be duly noted that for this risk, the responsibility is largely shared with the customer, since the customer should also implement security measures to mitigate and hamper it.
  3. Security flaws of the delivery method – the third level of risk in the product. Even if all is perfect with the actual developed product, improper delivery can expose the product to possible tampering by a "man in the middle". Such tampering, even if later proven to have happened outside of the development company, would not clear the development company of all wrongdoing, since the creator of the product didn't analyze the risks the product faces in transit.

Operations related risks

  1. Security flaw in the technical infrastructure – a risk which can cause a great number of problems, but which is the easiest to identify, albeit sometimes expensive to remedy. A security flaw in the infrastructure can result in:
    a) Access, theft or intentional corruption/destruction of business critical data or information by employees
    b) Accidental loss or corruption of business critical data or information
    c) Outside hacker attack
  2. Security flaws in operations practices – a risk which can cause the same results as the previous point and is much more difficult to identify, but usually much cheaper to remedy, since it requires a change in procedure, not capital expenditure.

Information security corrective measures

To mitigate the risks presented in this post, the following overall measures should be developed and implemented. The description of these measures merits the attention of a dedicated post, and they will be treated accordingly. For now, here is a brief summary:

  1. Top management must accept the philosophy of information security and actively sponsor, support and promote security. Also, they must be the first to fully adhere to all defined security procedures and rules.
  2. The software development company should define precise guidelines for security in operation, development and maintenance, supported by top management:
    a) Security in the product must be designed in and implemented from the initial design and architecture. If this isn't the case, security flaws will be abundant, and security patching will become a never-ending firefight.
    b) The infrastructure and privilege levels within the company need to reflect the security policy.
    c) All security incidents must be tracked from start to end, documented and communicated to appropriate levels within the company.
  3. The employees must be regularly reminded that information security is one of the basic missions of the company; a regular security awareness and training program must be instituted for all employees, starting at employment and ending with the exit interview.

Related posts

Security challenges in software development

Personal Data Protection - Anonymizing John Doe

Talk back and comments are most welcome

Security challenges in software development

The time I spent at Medic ACME gave me an insight into the workings of a rising software development company. All of the items I am presenting here have already been presented to the Medic ACME management, as pro bono work on my other engagement.
So, with their consent, I would like to present my conclusions. In the rush to achieve a good brand and reach the heights of profitability, any typical software development company has the following characteristics:

Get-things-done mentality – "This will be the largest contract in the history of our company. We must be prepared to deliver in 2 weeks/2 months. So get it working ASAP." A variant of this monologue is very frequent in most software development companies. Anyone telling you different is either lying, or not working as a developer for a living.
Tremendous workforce capacity – Regardless of race, gender or religion, the average developer/engineer is highly intelligent, technologically savvy, usually very up-to date with technical advances, and due to the high pace of technology, a person willing and able to learn new things in very short periods of time.
Frequently mixed duties – Since the workforce has excellent capacity, developers/engineers are quite often given additional duties other than development.
Significant technical resources at hand – The developer/engineer has ready access to significant computing power and software tools, both in the form of a local workstation and server infrastructure.
Onsite delivery – Of course, the product cannot stay in the proverbial shop forever, since sales cannot invoice something that is not delivered. The delivery can take several insecure forms (a minimal integrity-check sketch follows after this list):
  • The product can be sent via e-mail - neither encrypted nor digitally signed
  • The product can be published on a Web portal or FTP server - again, neither encrypted nor digitally signed
  • The product can be burned onto a CD and sent via some form of courier service, without protection against theft or tampering
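As a minimal illustration of what protected delivery could start with, here is a sketch that computes and verifies a SHA-256 checksum of a release archive using Python's standard library (the file name is just an example; a real process would also sign the artifact and transport the checksum over a separate channel):

    import hashlib

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Producer side: publish this value through a channel separate from the file itself
    print("release checksum:", sha256_of("release.zip"))

    # Customer side: recompute and compare before installing
    expected = "<checksum received out of band>"
    if sha256_of("release.zip") != expected:
        raise SystemExit("Checksum mismatch - the delivered product may have been tampered with")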
Maintenance and support in various forms – The second and/or third level of support for a software product falls on the shoulders of personnel from the development company. This gives them access to:
  • Error logs or crash dumps from the product which failed at the customer’s site, which usually contain a wealth of highly confidential information (usernames, passwords, confidential records etc)
  • Depending on the support contract, remote access to the customer's test or production servers, opening the possibility of being suspected of information theft or tampering
Inherently, the organization of a software development company presents the following security challenges:
  1. The pressure to create a functional product with short deadlines can lead to development decisions which may prove extremely insecure in everyday usage of the product
  2. The pressure to solve issues in maintenance and support can lead to untracked and undocumented direct modifications of software or database schema at customer premises.
  3. The created product will be used at client’s premises – a security problem with the product can have dire implications on the customer’s business, as well as on the reputation of the software development company
  4. Although rarely absolutely necessary, an overwhelming majority of developers have full administrator/root privileges on their local computers and sometimes even on their coworkers' computers
  5. Although never necessary, and always very dangerous, a vast majority of developers share the administrator/root account of development servers, databases and sometimes even network elements like routers and firewalls
  6. To reduce costs and optimize hardware utilization, the internal operations databases, business support systems, internal confidential file stores and the like are often supported and maintained on the same systems that house the development environments or databases.
  7. To achieve minimal headcount and maximal utilization of personnel, the infrastructure maintenance is often delegated to personnel who are also developers
  8. Proper and complete control over mobile devices is rarely instituted. With the access to most of the company’s information, an employee can easily transport an entire product source code, contract details or business plan outside of the company
These challenges can create a significant number of security risks on a daily basis.
I will follow up with a discussion of those risks in my next posts.


Related posts
Personal Data Protection - Anonymizing John Doe



Personal Data Protection - Anonymizing John Doe

I got invited to attend a strategic meeting at a company specializing in medical software (for the sake of confidentiality, let's call them Medic ACME).

Medic ACME needed a strategic position on the requirements presented by a customer for very stringent data protection. According to the presented requirements, the customer wants the entire patient history in a common database, but insists on minimizing the possibility of a leak of confidential medical histories. The requirements are as follows:

  1. Their general staff (non-MD) must track all procedures and all diagnoses for logistical purposes, but should not see the names, SSNs and addresses of the patients corresponding to the diagnoses.
  2. IT personnel should have no access to patient personal data (names, SSNs and addresses of patients) even if they dump the database.
  3. Upon request of an MD, the staff should be able to type in the personal data of a patient to retrieve the entire medical history of that patient.
  4. Actual personal patient data must be retained and usable for billing and administration purposes, but only by a very limited number of authorized persons.

This set of requirements reminded me of the Payment Card Industry - Data Security Standard (PCI-DSS) requirements for protecting Credit Card data.

Medic ACME proposed to use reversible encryption of all patient personal data. However, this method requires that the encryption key be common knowledge, or at least distributed to a very large number of employees. Also, each access will require decryption of data and an audit log entry, thus increasing the load on the database and on the application.

My proposal, which is currently under consideration by Medic ACME, was as follows:

  1. Copy all original personal data from the Medical database into a separate Patients database, together with a reference number that maintains the connection to all medical interventions for that patient in the original database. All data in this database will be encrypted with an asymmetric key pair (the public key encrypts, the private key decrypts).
  2. Create a unique ID string for each patient from the name, surname and SSN. In the original Medical database, create an SHA1 hash of each ID string and replace the original personal data with this hash. The Medical database will now contain only a nondescript hash and the internal reference number, so there is no personal data in that database to be stolen or read.
  3. Contact a trusted public PKI to issue a strong key pair for encryption. Store the public key in the system and use it to encrypt all data in the Patients database. Store the private key on several smart cards issued to authorized personnel according to the requirements.

This method permits the admission of new patients, with immediate encryption and hashing of their personal data. Also, with this method it is possible to retrieve a patient's history by entering the patient's name, surname and SSN.

For billing purposes, a special program needs to be written which will use the private key to decrypt the personal data, in order to correlate it with the medical history.
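Here is a minimal sketch of the two core operations, assuming Python with the cryptography package; the key pair is generated in software here purely for illustration, whereas in the proposed design the private key would live only on the smart cards, never in application memory:

    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def patient_hash(name: str, surname: str, ssn: str) -> str:
        """Deterministic SHA1 pseudonym stored in the Medical database instead of personal data."""
        id_string = f"{name}|{surname}|{ssn}".upper()
        return hashlib.sha1(id_string.encode("utf-8")).hexdigest()

    # Hash used as the lookup key when an MD asks the staff to pull a patient's history
    print(patient_hash("John", "Doe", "123-45-6789"))

    # Asymmetric protection of the real personal data kept in the Patients database
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    record = b"John;Doe;123-45-6789;1 Main St"
    ciphertext = public_key.encrypt(record, OAEP)      # done by the application on admission
    plaintext = private_key.decrypt(ciphertext, OAEP)  # done only by the billing program / card holder
    assert plaintext == record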

Talk back and comments are most welcome

9 Things to watch out for in an SLA

I wasn't planning to touch the issue of the Service Level Agreement (SLA) for some time, but it appears that the incident report (Link to Blog Post) has stirred enough attention to merit a post on the subject.


As I already mentioned, it is a very frequent occurrence that the SLA is just an afterthought when preparing a contract, and that the buyer usually waits for the supplier to produce the SLA agreement. Of course, this leads to a situation in which the SLA actually protects the supplier, not the buyer. So here are the things one must do to achieve at least a reasonable, if not good, SLA:

  1. Remember that any SLA is open for negotiation, but only during the initial purchase - although the supplier may take a very rigid position on the SLA (especially common in large companies), the SLA is part of the sales process. Standing by a rigid position should immediately raise red flags that the proposed "unchangeable" SLA is protecting the supplier, not the buyer. The best opportunity to negotiate it is during the initial RFP negotiations. Once the product/service is sold and goes into production use, the buyer has lost all negotiating power. So be very wary of agreeing to negotiate the SLA after delivery, after the end of warranty, or under some similar wording.


  2. Define availability as you would expect it - availability is usually calculated as the percentage of time the product under SLA is up and running. Usual numbers vary from 98% to 99.999% of the time. Now, let's examine the "time" factor in the formula. Upon first reading, a person will usually interpret 98% as 98% of any time measure, whether it be an hour, day, month, year or century. But let's do the math: in an SLA specifying a percentage of availability per time period, the total downtime is accumulated over the entire time period. Furthermore, if no time period is specified with the availability percentage, the default period is the validity of the contract - which is very often 1 year or more. So if you sign a yearly contract with an SLA of 99%, it doesn't guarantee that you will see only a few minutes of downtime per week. It means that you won't have more than 3.65 days (about 88 hours) of downtime over the entire year - which means you can have ten full 8-hour workdays WITHOUT ANY SERVICE in that year. If you take the same 99% but insist on applying it on a weekly level, you suddenly get much better odds - now you can't have more than 1.68 hours of downtime in any given week. So take a day of meetings in your company to define your maximum acceptable downtime, and run this calculation for each candidate period to find the best option for you (a small calculator sketch follows after this list).


  3. Always keep in mind the distinction between reaction time and correction time - During the negotiation of an SLA, it is usual to have very tense discussions to achieve a good "response time". But this umbrella term is an excellent umbrella - for the supplier! Response time is defined as the time between the formal logging of a problem and the moment a representative of the supplier logs a response (sends a reply e-mail, makes a phone call or arrives on-site). So when defining response times, ALWAYS define two or three different times: reaction time - equivalent to response time; workaround time - the time in which a temporary solution that alleviates the problem is expected; and correction time - the time in which a final solution is expected.


  4. Make precise definitions of problem severity levels and tie them to reaction and correction times - as in my previous post, the severity of a problem can be viewed differently by the buyer and the supplier. So define a clear matrix of severity levels, and include a clause stating that if the severity of a problem is assessed differently, the view of the buyer prevails.


  5. Define response times for all levels of severity - naturally, the buyer should expect faster reaction and correction for more severe problems. When defining the severity levels, include for each one at least the expected reaction time and workaround time.


  6. Define channels of communication and escalation - At first glance a very simple thing, but one that is very often the reason an SLA breach cannot be disputed. For a problem to be considered properly reported, the supplier will expect a report from an authorized person to specific persons via e-mail, fax or phone. Any deviation from the agreed process is an excellent excuse for not meeting SLA parameters on the grounds of "not being informed". So always have at least three authorized persons for problem reporting, and modify internal procedures so that these persons are the first to be informed of a problem. The same goes for the escalation of problems to higher levels, should the problem persist.


  7. Define the conditions under which the SLA criteria apply to a problem - It is not uncommon in SLA agreements for the SLA criteria to start applying only from the time the problem is reported by the buyer to the supplier. This is an element usually insisted upon by the supplier, since it offloads the burden of monitoring and reporting onto the buyer. By the time the problem is reported, it has usually already existed for several minutes up to half an hour. Moreover, there are products for which the supplier cannot perform the monitoring and cannot conclude that a problem is occurring. So even if this point cannot be changed in the contract, adjust internal procedures so that the authorized persons of the buyer IMMEDIATELY report the problem to the supplier. Internal metrics can even be applied to this process, to identify internal lags in communication.


  8. Define measurements and reporting - An SLA is useless if you can't measure and document the duration of each problem properly. So the buyer should keep track of problems, with information on severity, duration, reaction time and correction time, together with all relevant e-mails and messages exchanged. Tracking can be achieved with something as simple as an Excel sheet; all it requires is regular updating.


  9. Tie in penalties and contract back-out options - this is the actual big stick of the SLA. Breach of SLA parameters should be tied to serious penalties and the possibility of contract termination. When defining penalties, always strive to define them as a monetary value payable immediately upon breach of the SLA. Also, try to negotiate a penalty that grows exponentially with each further hour of SLA breach. Do not accept a penalty compensated with other goods or services from the same supplier, since the supplier will value such services at sales price in the refund while their internal cost is significantly lower, reducing the supplier's actual loss in an SLA breach.
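To make the availability arithmetic from point 2 easy to repeat during negotiations, here is a small Python sketch that converts an availability percentage and a measurement period into the maximum downtime the SLA actually allows:

    def allowed_downtime_hours(availability_pct: float, period_hours: float) -> float:
        """Maximum cumulative downtime the SLA permits over the given period."""
        return (1.0 - availability_pct / 100.0) * period_hours

    for label, hours in [("week", 7 * 24), ("month", 30 * 24), ("year", 365 * 24)]:
        print(f"99% availability per {label}: "
              f"{allowed_downtime_hours(99.0, hours):.1f} hours of allowed downtime")

    # Output:
    # 99% availability per week: 1.7 hours of allowed downtime
    # 99% availability per month: 7.2 hours of allowed downtime
    # 99% availability per year: 87.6 hours of allowed downtime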


Why don't you like my network?

I have great respect for network admins. It is their job to get traffic from A to B as fast as possible, and to do this while new requests for connectivity keep piling up. I also have great confidence in them: they do their job reliably and efficiently.

However, in the past weeks I have had the opportunity to review several relatively large networks, and found all of them lacking in one aspect or another. And whenever I expressed my reservations, the network admins asked the question in the title: Why don't you like my network?

Of course, it is only natural to be proud of your work and not to take criticism of it very well.
Here are the top reasons why the responsible network engineer should allow a friendly but unbiased outsider to take a look at the network once in a while. This outsider can take the form of network management software, a consultant, or just a friend from school who has a network-admin job somewhere else.


  1. The author rarely sees his own mistakes - this is true for any art or industry, including network design and management. A fresh view of things and a little bit of analysis can identify possible design flaws and errors, or simply bring a new optimization idea to the table.


  2. There are too many doorways into the network - all network administrators are only human at the end of the day. As the business grows, more and more entry points into the network appear: partner networks, new services, management requirements, business opportunities etc. As this happens, it becomes easier and easier to forget adding a rule here, or to relax the firewall rules just a bit more than required so a service works without those troublesome glitches, or to create a less secure link as a temporary measure.


  3. There are things known, and there are things unknown - think you know everything that happens on your network? Think again - there are very few networks where all settings are according to policy and procedure. Consider this scenario: admin A took a shortcut one evening and forgot to correct it, and admin C noticed the added configuration. Admin A isn't around, so admin C assumes a test is in progress for some project led by admin A. Suddenly, this glitch becomes part of the configuration, and is soon forgotten. I can guarantee that there isn't a network in the world which contains only the policy-approved set of rules and configuration.


  4. The users of late have become very creative - users are becoming technically very experienced, but their security awareness is rarely on par with their technical knowledge. This can easily lead to situations where users try to use services outside the ones approved by policy, bringing in programs via USB, CD-ROM or e-mail.


  5. Things are moving way too fast - new services are being created every day. The business can identify hundreds of new opportunities per day and require changes that enable new services literally overnight. In such situations, there is a huge risk of enabling something without properly securing or protecting it. Oh, and when was the last time you checked whether anyone in the company has confirmed trusting an ActiveX control from an unknown web site?


  6. The network is all over the place! - the elements you are utilizing and managing are not always in front of you, and you don't know precisely what is happening to them. Are you sure there are no broadcast storms behind all the routers on your network? When did you last check the rate of packets dropped or fragmented at that branch office 100 miles from the head office? (A small sketch of such a check follows this list.)


  7. The outsider is not affected by your everyday business - of course, all those checkups could easily be done by the network admin himself/herself. So why bother with the outsider? Simply because the outsider won't have to drop everything in order to check that stuck e-mail of the manager, or to attend the staff meeting, or to start and manage the implementation of the new service that is behind schedule for rollout. The network admin is there to help the company, and the consultant is there to help the network admin.
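
As a small illustration of the kind of check mentioned in point 6, the sketch below turns two interface-counter samples into a packet-drop rate. The counter name and the sample values are assumptions; in practice they would come from SNMP, the device CLI, or whatever monitoring agent is already deployed.

    def drop_rate(old_sample: dict, new_sample: dict, interval_s: float) -> float:
        """Dropped packets per second between two counter samples on the same interface."""
        dropped = new_sample["in_discards"] - old_sample["in_discards"]
        return dropped / interval_s

    # Two hypothetical samples taken 300 seconds apart at the branch-office router
    t0 = {"in_discards": 10_400}
    t1 = {"in_discards": 10_700}
    print(f"{drop_rate(t0, t1, 300):.2f} drops/sec")   # 1.00 drops/sec

Running such a check regularly, rather than waiting for users to complain, is exactly the kind of routine an outside reviewer tends to notice is missing.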

The SLA Lesson: software bug blues

I have been hugely busy in the past weeks with several projects, so the blogging got stuck... I will try to avoid this in the future. Now back to my latest experience.

Part of every Information Security Management System is the incident management process. It is a process in which the company identifies a problem which is occurring or has occurred, and performs steps to contain it, minimize the impact, identify the root cause and take measures to prevent the incident from recurring.

The incident in question is the dreaded application blocking - a company of 1000 employees uses a custom-made, fully integrated CRM/ERP system, which exhibited complete or partial non-responsiveness lasting several minutes at a time over a period of nearly two hours. This situation was identified in several departments, while the rest of the company was functioning as usual.

As soon as the call came in, the incident response team was formed and the problem was analyzed. After 15 minutes, the problem was identified: Accounting had started a program which runs once a week and affects the billing information of most Key Customers. The program was started at its usual time, with the usual parameters. The problem was rectified by stopping the processing and postponing it until after business hours.

Upon further investigation of the incident it was identified that the problem had occurred before, at regular intervals, but was never reported as an incident. The situation had been handled by the IT department, which had reported the problem as a bug to the software company that created the system.

When I requested a status update from IT on this bug report, I received shocking information: the software company had closed the bug report with a status of DENIED.

So I called the release manager at the software company, and I got an even bigger shock: he explained that the software company had decided to deny this bug report because it was overwhelmed with change requests and bug reports from our company. In his words, this bug was a mere nuisance, since it blocked part of the software for about an hour once a week - just run it during lunch!

At this point, the incident was no longer just an incident; it became a support contract issue, so I reported the situation to management and recommended an intervention from their side.

This incident is a very good lesson in the different priorities and focus of the parties involved:

For a user of the system any problem can be a show stopper.

For the manufacturer of the system, the same problem can be played down to the importance of an itch. There can be many reasons for such a difference in opinion, but here are a few:

  1. There are insufficient human resources to address the issue
  2. There are profitable change requests or projects to address, so this element is simply postponed, since the software company will not see a profit from engaging its resources in correcting the problem.
  3. The problem is caused by a design flaw in the system that is either very difficult or impossible to rectify in a reasonable time and within a reasonable budget.

The only way to increase the value of the users' incident in the eyes of the manufacturer is through applying proper controls and penalties in the support contract. That is why the history and outcomes of security incidents should also be used as very valid input into the preparation and negotiation of the SLA.
