Steps to Ensure a Smooth(er) Migration to a Cloud Service

Moving a service to a cloud provider can be a very beneficial activity (reducing cost, peace of mind, transfer of risk), but it can also create a huge number of problems if not done correctly.

We will not delve into what SLA and service conditions are agreed on with your service provider. We will focus on the migration process.

Assuming you have selected a service to migrate to a cloud provider, and have selected the cloud provider, even after contract signing things may still be far from complete. The migration process itself can be very painful and can break the entire service for an extended amount of time. And sadly, the service provider may not be too interested in properly supporting you in the migration process, for whatever reason.



To ensure a successful migration, or at least to be able to 'pull the handbrake' before disaster strikes, make sure that you check the following elements before diving into the migration process:
  • Clearly understand what data from the current service will be migrated into the cloud service - this is crucial from several points of view: if data is migrated, you must understand how much data can and will be migrated, whether the service provider has sufficient space to accept all of it or you'll need to prioritize, and whether the format of the data remains the same. For instance, you may be using a MySQL database but be migrating all data into an Oracle cloud service. Also, any data that is not migrated will need to be kept available to the users as legacy data.
  • Clearly understand the migration process of the data from the local into the cloud service - if data is migrated at all, the migration can vary wildly. It can depend on very complex factors like changes of format, structure, proxying etc., or on something as simple as the bandwidth available to transfer the files.
  • Understand the authentication source of the cloud provider - all your services were authenticated against a data-set within your company, usually an LDAP server or a database. You must understand which data-set the cloud provider can support for authentication, because you may need to recreate your users' accounts and generate and distribute new passwords to them.
  • Gather all usage scenarios of the service as it is currently delivered (in house) - there may be multiple usage scenarios for a service that have been introduced through the years, either officially or unofficially. For instance, a mail server can be accessed via POP3, IMAP, MAPI (on Exchange servers), and different users may be using different protocols.
  • Confirm which usage scenarios are supported by the service provider - your users may need to be reconfigured in advance or at the moment of migration. You need to understand which steps you'll need to take to maintain minimum outage for the users. This is usually tightly connected to the authentication source and set-up.
  • Ensure you have bandwidth - going into the cloud means remote access. Whatever your in-house service was, you never had to care about bandwidth usage and latency over your gigabit LAN, but that bandwidth usage may now become very significant. Observe your current network using network analysis tools and learn more about the broadband packages that you use, especially their flexibility to quickly increase bandwidth or reduce latency if needed at roll-out time (a minimal measurement sketch follows this list).
  • Know who to call - at the time of migration and right after it, things are going to be hectic, issues will arise all over the place, and your team will be less than their usual competent self, since they'll also be users of the service. Have all of them read through the SLA and the communication and escalation procedures of the cloud contract. This way issues will be escalated rapidly, and support calls will be made much faster.
  • Understand your fallback options - any migration can go wrong, and you need to be able to continue your original service in such a scenario. Investigate whether your original service will be available during and after the migration, and look and test for any risk that the migration may leave your in-house service broken. This can become a huge issue if things go wrong.
  • Make a plan with an outage period and the ability to go back to your service - before you go into migration, make a plan for the migration in which you define the migration start and finish times based on testing results. The entire migration period should be planned as downtime, and the source service should be in a 'frozen state' (no new entries in it). The reason for such a downtime is two-fold: even if the migration is online, if anything goes wrong you are under less pressure to fix it, and the frozen state of the source service creates a point in time to which you are prepared to revert in case of trouble.
  • Inform everyone of the pending change - spread the word, the customers should be well aware of the change. Informing everyone is about people being able to plan and adapt if the service is out, but it also helps you and your team - you'll get more feedback and discover overlooked items, and during the crunch time the users will give you more breathing time instead of jumping on your throat because their service is not working. 
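
To make the bandwidth check above a bit more concrete, here is a minimal Python sketch of the kind of measurement you could run before roll-out. The host name and test URL are placeholders rather than real provider endpoints, so substitute whatever endpoints your cloud provider actually exposes; treat the results as a rough reference, not a guarantee.

# Minimal sketch: rough latency and throughput check toward a prospective
# cloud endpoint, using only the Python standard library.
# The host and test URL below are hypothetical placeholders.
import socket
import time
import urllib.request

HOST = "cloud.example.com"   # hypothetical provider endpoint
PORT = 443
TEST_URL = "https://cloud.example.com/speedtest/10MB.bin"  # hypothetical test file

def tcp_connect_latency(host, port, samples=5):
    """Average time (ms) to open a TCP connection to host:port."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=10):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

def download_throughput(url):
    """Rough download throughput in Mbit/s for a single test file."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=60) as resp:
        size_bytes = len(resp.read())
    elapsed = time.perf_counter() - start
    return (size_bytes * 8 / 1_000_000) / elapsed

if __name__ == "__main__":
    print(f"Average connect latency: {tcp_connect_latency(HOST, PORT):.1f} ms")
    print(f"Approximate throughput:  {download_throughput(TEST_URL):.1f} Mbit/s")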

Migrations are a very stressful time for everyone, and hopefully the above points will help both you and your customers survive them in a smoother manner.

Talkback and comments are most welcome

Related posts
Management Reaction to Failed Cloud Security
 Security Concerns Cloud “Cloud Computing”
Maintaining quality in outsourcing telco services
5 SLA Nonsense Examples - Always Read the Fine Print  




Farewell to Ray Bradbury


For the man who illustrated our imagination, and made me personally read more...

rest in peace, Ray


Ray Bradbury, 1920-2012

Observations of lack of research in social engineering

Phone call social engineering is considered one of the easiest methods of social engineering: it does not involve personal contact, and leaves little in the way of an electronic trail (e-mail can leave a much larger electronic trail if not approached properly).


In the past months Shortinfosec had the fortune to review a social engineering attack performed by a pen-test team on a company. While the pen-test was considered a failure by the client, significant elements of the attack point to open issues with the client. Publication of this information is based on the provision that all information regarding the pen-test client's and provider's location, business and identity remain unidentifiable.

The attack
The social engineering attack was performed over a phone line, with the pen-testers not even in the same city as the client and using publicly accessible lines. The targets of the attack were chosen from social networks.

The attack was three-stage:
  1. Collect information about the order delivery process (delays, timing etc.)
  2. Collect information about a current order in the pipeline (an order prepared but not yet delivered to the customer)
  3. Divert the order to a different address.
The attack was performed through multiple phone calls, which created contact with multiple targets. Each call was a probing attempt to collect as much information as possible. The first and second stages of the attack were aimed at the same targets, but with a delay of several days between stages. Two persons performed all the calls.
  • In the first stage of the attack, the attackers posed as a disgruntled customer who insisted on getting details of the process because his delivery had not been handled properly. Approximately half of the targets either willingly explained the process, or were unable to reach the account manager and proceeded to divulge information to the attackers.
  • In the second stage of the attack, the attackers approached the targets that were deemed 'soft' - those that were most compliant and divulged the most information. They misrepresented themselves as persons from multiple client companies until they received information about a current order in the pipeline. Only a small number of targets responded with the required details, simply because most targets did not have access to order information.
  • In the third stage of the attack, the attackers again approached the 'soft' targets, attempting to divert the order in the pipeline to a different delivery address. Most targets did not have the authority to change the delivery address. The attackers eventually reached a target with the appropriate authority, but that target contacted the real client while on the phone to verify. The client denied any change, which set off all kinds of alarms. In the end, the police were notified immediately, and the pen-testers nearly ended up in custody.

The review
When investigating the approach used by the social engineering attack, we found missteps in the following areas:
  • The process research - the failure of the attack had one primary reason: the requested redirection address was outside of the free delivery area, and the targeted person sent an electronic invoice for the redirection to the real client. This invoice was rushed through by the client's accounting department since it was for an outstanding order, and was immediately disputed by the client, thus exposing the attack. This shows insufficient research of the process.
  • The selection of targets - the targets of the attack were selected by a single criterion: anyone who had published information about their employment at the pen-test client on social sites. This approach is easy, but there was very little assessment of how useful these targets would be in the later stages of the attack, and how they would tend to react. This resulted in many calls in the first and second stages that yielded relatively low-quality information or responses - thus spreading the attackers' resources thin.
  • The selection of the faked client - the faked client was not researched, and was selected at random from the information received in the second stage of the attack. The client should have been approached to research its process. A contact center channel would be an excellent 'cover' for such a task, especially since the pen-test client operates via a phone channel. But instead of researching the client through impersonation of an anonymous service like an Appointment Setting Service, the attackers merely dropped the name of a client. This lack of research, combined with insufficient process research, left the pen-testers unable to anticipate the invoice reaction.
Apart from these missteps, the actual amount of information gathered was quite interesting: the attackers collected information about the business process, customers and current orders. Even without being able to redirect an order, the collected information could be valuable for sale to competitors or for publication to discredit the business.


The conclusion
This particular case was deemed by the pen-test client as a failed social engineering attack, but that is obviously a purely formal treatment of the outcome.
The missteps identified in the process are not uncommon in a pen-test scenario, where deadlines are short and results need to be produced by the pen-testers on time and under budget. The entire process and its results hold lessons for both the pen-test client and the pen-test team:
  • The pen-test team should reserve sufficient time in the project schedule for investigation, which is crucial when playing with the emotions and reactions of human beings. 
  • On the other side of the fence, the pen-test client is still quite exposed, with information leaking left and right, which was proven by the amount of information collected by a pen-test team with a relatively small amount of research.


Talkback and comments are most welcome

7 Problems with Cell Phone Forensics

Cell phones don’t feel newfangled but in truth they are. With innovation comes swift change, sometimes so swift that it is difficult for forensic scientists to keep up. Criminals use cell phones in a variety of crimes and it is up to the forensic scientists to uncover their transgressions. But where do they start? What are some complications that scientists encounter?

   

  1. Innovation - Change is the number one issue for forensic scientists to overcome. Even the cell phone manufacturers don't always know how to retrieve information stored in new phones, so how can scientists retrieve the information? Staying up-to-date on new cell phones is challenging but not impossible. As fast as new phones are created, criminals come up with ways to abuse them. Strangely enough, this can be beneficial for forensic scientists. Using online tips can allow scientists to access information that would otherwise remain unreachable.
  2. Charge – Unlike computers, much of what is stored in a phone's memory is reliant upon the battery. When the power goes, so does the information. Depending on what information you are looking for and how it is stored, battery or charger power is an essential thing to think about.
  3. SIM cards and removable media - SIM cards are the soul of a cell phone. They carry vital user information. Likewise, removable media, such as SD cards, can have lots of stored data on them. It is important that forensic scientists have the appropriate equipment to read and evaluate the data.
  4. Passwords – Password protection on cell phones is challenging to overcome, though not impossible. Depending on the model, passwords can be circumvented in several ways.
  5. Internet connection – The smarter cell phones become, the harder they are to examine. Using an internet connection instead of SMS or voice makes a forensic scientist’s job much more difficult.
  6. Quarantine – One thing that is often disregarded is the need to sequester the cell phone before analyzing it. New text messages can overwrite old material, and connections to the internet can invalidate old data. It is imperative to make sure the phone is isolated.
  7. Security augmentations - Forensic scientists must be especially alert when dealing with cell phones that have been modified in some way. Some users are capable of putting in dead man's switches, effectively wiping the contents after an action or a period of time. Malware can also be downloaded onto the phone, placing the analysis computer systems in danger.

There are many more problems for forensic scientists to watch out for, but these are the seven most common. Tracing cell phone data is a laborious task, but it can be done. All it takes is a little investigation, a few tools, and a lot of persistence.

This is a guest post by Coleen Torres, blogger at Phone Internet. She writes about saving money on home phone, digital TV and high-speed Internet by comparing prices from providers in your area for standalone service or phone TV Internet bundles.


Talkback and comments are most welcome

Related posts

When Will Your Mobile Phone get Hacked?
Is Geo Location Based DDoS Possible?
Is the Phone Working? - Alternative Telephony SLA  

Support Free Internet - Stop SOPA and PIPA

Stop SOPA and PIPA: We openly declare our support for the efforts to prevent the ability for governments to police the Internet.


Kudos to Wikipedia


Talkback and comments are most welcome

Related posts

Privacy Ignorance - Was Eric Schmidt thinking?

Failed attempt at optimizing InfoSec Risk Assessment

Last weekend I got into a discussion with an insurance supervisor on the topic of risk assessment. He explained how actuaries work in insurance, that there are standardized tables of probabilities for an event to occur, like sickness and death, and how these tables are used to calculate insurance premiums.

After digesting the explanation, my reaction was that I had found the holy grail of information security risk analysis: all it takes is for a large enough number of incident events to be collected into a statistical table, and all possible types of information security incidents will have a standardized table of frequency and impact - no more assessments over the entire organization!

And in such a great and utopian solution, the information security personnel would feel like they are doing actuarial jobs at least a quarter of the time.

But I was quickly brought back to reality by the insurance expert, with a good question: actuarial tables are compiled based on information that is mandatory to report - illness, fires, theft, even death. How will you collect accurate information on information security incidents, when reporting them is not mandatory?

And he was perfectly correct: collecting information to compile an actuarial table for information security would be impossible. There are very few companies in the world that will release any information about an information security incident if it hasn't impacted the public in a very obvious way. Also, the value of the impact is calculated by any number of methods, with different items included in the value, making incident valuations incomparable from one incident to another.

Having a standardized method for risk assessment in information security based on hard numbers would be great. But since the factors included in any incident are very complex and varied, and consistent incident reporting is nearly impossible, we will be sticking to the current qualitative methods.
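
Purely for illustration, here is a minimal Python sketch of what such a 'hard numbers' calculation could look like if a standardized incident table existed. The incident categories, frequencies and loss figures are entirely invented - no such table exists for information security, which is exactly the point of the post.

# Hypothetical sketch: annualized loss expectancy (ALE) from an
# "actuarial style" table of incident frequency and impact.
# ALE = ARO (annual rate of occurrence) x SLE (single loss expectancy).
# The categories and figures below are invented for illustration only.
incident_table = {
    # category: (incidents per year, average loss per incident in EUR)
    "phishing-led compromise": (2.0, 15_000),
    "lost or stolen laptop":   (1.5,  8_000),
    "web application breach":  (0.2, 120_000),
}

for category, (aro, sle) in incident_table.items():
    ale = aro * sle
    print(f"{category:28s} ALE = {ale:>10,.0f} EUR/year")

total = sum(aro * sle for aro, sle in incident_table.values())
print(f"{'Total expected annual loss':28s}       {total:>10,.0f} EUR/year")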


Talkback and comments are most welcome

Related posts

Example Risk Assessment of Exchange 2007 with MS TAM
Risk Assessment with Microsoft Threat Assessment & Modeling  

The Difficult Life of Mac in the Mixed Environment

Just before the sad event of Steve Jobs' death, we obtained a MacBook. While everyone is still immersed in reading the biography, we embarked on the journey of using a new OS for the first time. Here are the positive experiences and gripes that we found when using it in a multi-purpose, multi-platform environment.

Please note that we are just starting to use the Mac, and some of our issues may have solutions that we haven't found yet.


The environment
The MacBook arrived in the very mixed environment of Shortinfosec:

  • Domain - an active AD Win2008 functional level domain, but used only for testing. The computers are only added to the domain to do research related to the domain.
  • Computers - Work is done on our laptops - HPs, Lenovo and Acer running Windows 7, Vista and Ubuntu.
  • Virtual environment - Virtual Box and VMWare player based virtual machines, mostly bridged network
  • Network - 802.11n WiFi and wired 1 Gbps Ethernet, with Cisco and Huawei network elements
  • VPN - Cisco IPsec VPN for remote access
  • Storage - iSCSI based storage server, built around an Openfiler storage server, on the wired LAN segment
  • Printing - a very old HP LaserJet printer, so old that we have to use a Centronics to USB converter, so we can attach it to any laptop we need.
What we do in this environment:
  • Testing and honing skills of attack tools
  • Running test scenarios on corporate products
  • Active Directory fiddling and trying to break it
  • Playing games
  • Blog management
  • A lot of article and paper writing
  • Java development
  • Odd accounting jobs
  • Lots of games ;)

The positives
We like to start on a positive note, so here are the things we like about our Mac
  • User experience - as Steve Jobs insisted, the user experience of working in Mac applications on the Mac is seamless. Everything just runs. Even attaching external hardware like a 20-year-old printer was a breeze - much easier than doing the same on Vista.
  • Battery life - the battery life is simply outstanding. The commercials say that the Mac can do 7 hours on battery, and that is quite true for working in a word processor at 65% screen brightness.
  • Portability - not really comparable, since all other laptops are 15'', but the Mac is very easy on the shoulders, and an excellent companion at meetings.
  • Speed of functions - all functions within the OS are implemented VERY WELL. For example, the Cisco IPSec VPN connection using the native Lion client authenticates at least 10 seconds faster than the Cisco VPN Client for 64-bit Windows 7 (we actually measured).

The gripes
Naturally, not everything is that great, and here are the frustrations that we faced with our Mac.
  • The keyboard shortcuts - putting an IT pro who has worked on a PC and Unix for 20 years in front of a Mac running OSX is a special kind of hell: NONE of the keyboard shortcuts are the same, and it takes a significant effort to shift to the OSX shortcuts. They are not illogical, only completely different, which hampers productivity for anyone used to doing much of their work on a keyboard.
  • Interoperability with other platforms - There are interoperability gripes with a lot of stuff. The Mac can join an AD domain (sort of), but we had a lot of stress getting the Mac to use cached credentials. Mostly the same happened with a Linux based LDAP service.
  • Software is missing - a lot of productivity software that we are used to is missing for the Mac - we stumbled on Visio, then on MS Project, then on Notepad++, then on 7zip... We didn't go into developing Java in Eclipse, because of the following point. Mind, there are replacements for most of the software we were missing, but productivity was hampered since we needed to find the appropriate software, buy it and learn how to use it. VMware Player is nonexistent for Mac, so we are limited to VirtualBox.
  • Lacking native support for obvious items - first disaster - no support for NTFS write. We had to revert to the dreaded FAT32, which was a deal breaker for development. As if that wasn't enough, iSCSI is not natively supported, which further killed any attempt at accessing the large Java codebase on our iSCSI fileserver.
  • Remote access - so far we haven't discovered an efficient native tool to access and work on our Mac remotely. The Apple Remote Desktop is shameless highway robbery - why should any company or user need to pay money to access and manage a single Mac remotely? We are at the moment trying out VNC, which is not our preferred platform.
  • No Native or Free Disk Encryption - (Updated, thanks to comments on reddit.com). Up to OSX 10.6 only Sophos SafeGuard provided full disk encryption for a Mac. For OSX 10.7 there is FileVault full disk encryption, but we haven't tried it.


Conclusions and thoughts
We are not abandoning the Mac - it is a great tool and an asset in our little lab. But in the current state of things, it takes a lot of effort and compromise to fully migrate to the Mac platform, especially since multi-environment knowledge is required.

If today someone asked us whether a Mac is a good idea for company use, we would not be very supportive, for the following reasons:
  • Lack of business software compatibility
  • (Updated per the comment of Ryan Black) Incompatibility with writing to the NTFS filesystem (which is everywhere) (previously stated NTFS fileservers - fileservers are accessed through SMB, which is supported)
  • Learning Curve for efficient use


Talkback and comments are most welcome


Related posts
Information Risks when Branching Software Versions
8 Golden Rules of Change Management

Choosing Data Storage - A difficult dance

IT has come a long way in the past 15 years, and has definitely advanced into the realm of a commodity service. But there are still complexities under the hood of this commodity service. One of the most underestimated in complexity is data storage - it is taken for granted by everyone. For example, I frequently talk to a high-ranking manager in a software company and he constantly states that all that is needed is another disk.



At the end of the day, data storage is very far from simple. Every organization needs to provide a storage service for its requirements. But storage is not only capacity, and one must be careful when choosing the appropriate storage solution. There are three basic options at the moment:

  • Cloud storage services
  • Open Source based storage systems
  • Commercial enterprise storage systems

We will evaluate each option against the following key parameters of a storage system:

  • Capacity - The first (and usually only) thing we think about when we talk about storage - and the easiest to achieve. Regardless of the chosen option, capacity is upgradeable. In open source storage systems, which are based on commodity hardware, upgrades are limited to the abilities of the host server/box. The enterprise systems are much more upgradeable, but at high cost. For a cloud storage provider, capacity upgrade is nearly infinite (at least on paper). It is wise to plan ahead and consider whether future upgradeability will support your requirements.
  • Input/Output Operations per Second (IOPS) - The usually forgotten and very difficult to assess parameter, but nonetheless very important. IOPS represents the number of operations that the system can perform on the storage within a time-frame of 1 second. But read and write operations on a storage can vary widely (sequential or random, read or write, and there are even front-end and back-end IOPS when using RAID configurations), so a single number rarely tells the whole story. Cloud storage services do not publish IOPS, enterprise manufacturers always publish the IOPS number that is most beneficial to them, and the open source solutions mostly leave the IOPS to the builder of the system. In any case, the end result is: DO NOT TRUST THE NUMBERS. There are some nice estimation calculators online, like wmarow's iops calculator, but use them only for reference (a minimal estimation sketch follows this list). The smart solution is to test the storage service in a configuration as close as possible to the one you wish to use, and assess whether performance is acceptable.
  • Access Bandwidth - This is not disk bandwidth, which is reflected in the IOPS. The access bandwidth is the bandwidth between the server and the storage itself. Naturally, you want this to be as high as possible. For enterprise storage systems, discussing access bandwidth is moot, since such storage is mostly connected through Fibre Channel, which has multiple links of 2, 4 or 8 Gbps. For open source storage systems, which are mostly iSCSI based, the access bandwidth starts at 1 Gbps, minus Ethernet overhead. For cloud storage services, access bandwidth is a significant factor - cloud services are accessed through WAN links, where access bandwidth is limited and may be prone to congestion. When choosing a storage system, test your application with the bandwidth you are planning on using.
  • Redundancy and high availability - What kinds of failures and incidents can a storage system survive? Cloud services claim that they can survive a lot - short of a cataclysmic event or a nuclear bombing - but such claims should be tested. Enterprise storage systems are designed to survive nearly any hardware issue within them, and provide the ability to replicate to other systems tens of kilometers away (naturally, at a high price). Open source storage systems' redundancy depends on the actual hardware redundancy of the box the customer built; they provide some technologies for replication, which are at different levels of maturity. Always consider placing the data based on its importance to the company - can you survive without it?
  • Actual hardware - storage systems are comprised of well known components - hard drives, controllers, interfaces, power supplies. For both enterprise storage systems and cloud services, the customer does not need to bother too much with the hardware - the provider constructs and combines the required hardware. On the other hand, when preparing an open source storage system, the customer usually builds the hardware, which means finding appropriate hard drives, RAID controllers, redundant power supplies, caching mechanisms, and LAN and FC interfaces. Building a system from scratch is a great experience, but commodity devices may be prone to many more failures than purpose-built hardware. Testing is not very useful here, but think ahead about the very real risk of failure of commodity components.
  • Reporting - Once the storage system starts working, reporting becomes an immediate issue. The customer will want to know the load on the system, on individual hard drives and logical devices, response times, utilization trends etc. Again, enterprise storage systems shine in this area with an excellent portfolio of reporting tools, albeit usually at exorbitant prices. Cloud storage services may provide some reporting, but not too in-depth, and the open source systems usually fare poorly, since the open source projects are focused on functionality, not reporting. When choosing any storage system, always ask to look at live reports from the service/system you are planning on using.
  • Support - Again, once the storage system starts working, there will be problems. And I guarantee you - the problems will not be simple 'it works or it doesn't' situations. There will be all kinds of complicated and seemingly impossible combinations of issues. And this is exactly where the customer will need support. But there is no clear-cut answer to which type of storage system has the best support. One must tread carefully here, because good support is about having trained support personnel, but also very dedicated support personnel. By definition, enterprise storage systems have a great advantage in this area, but this advantage can easily be ruined by a support team that juggles many projects, is used as presales, or is simply not dedicated to supporting a customer. Cloud services fall into much the same category, but it can be difficult to discuss storage issues with a cloud storage service: the engineers are impossible to reach, there is insufficient data to support an issue (reports, analysis), and the cloud service provider usually has a well crafted SLA to protect themselves from most issues. The open source systems are a support issue in a different way - since the systems are built from software written by many, there are rarely any real experts to support such a system, unless you pay someone - and even then it may be a risk.
  • Vendor lock-in - Cloud storage services are the strongest player in this area - if the customer chooses a cloud storage system as an important part of their infrastructure, they will adjust their operation to the cloud system and create a 'symbiotic' bond, thus making migration very costly. Enterprise systems are much easier to migrate from, since they are basically just huge hard drives. If all else fails, an operating system level copy command will provide a very crude but always successful migration. Open source storage systems have no lock-in: simple hard drives, where migration is a copy-paste operation.
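
As a companion to the IOPS point above, here is a minimal Python sketch of the common rule-of-thumb estimation of front-end versus back-end IOPS for a RAID set. The per-disk IOPS figure and the workload mix in the example are assumptions, so treat the output as a rough reference only and always test the real configuration.

# Rough back-of-the-envelope sketch of front-end vs back-end IOPS for a
# RAID set, using the common write-penalty rule of thumb
# (RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6).
# Per-disk IOPS figures and the workload mix below are assumptions.

RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def usable_frontend_iops(disks, iops_per_disk, raid_level, read_ratio):
    """Estimate the front-end IOPS a RAID set can sustain for a given
    read/write mix, given the raw (back-end) IOPS of its member disks."""
    penalty = RAID_WRITE_PENALTY[raid_level]
    raw_backend = disks * iops_per_disk
    # Each front-end read costs 1 back-end IO; each front-end write costs
    # 'penalty' back-end IOs, so:
    return raw_backend / (read_ratio + (1 - read_ratio) * penalty)

# Example: eight 10k RPM drives (~140 IOPS each, rule of thumb),
# RAID 5, with a 70/30 read/write workload.
estimate = usable_frontend_iops(disks=8, iops_per_disk=140,
                                raid_level="raid5", read_ratio=0.7)
print(f"Estimated sustainable front-end IOPS: {estimate:,.0f}")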

Conclusions
There are multiple pros and cons across our storage system parameters, but at first glance, the enterprise storage systems have the upper hand. Bear in mind, though, that such systems always come with exorbitant pricing, especially on any upgrades after the initial purchase. Therefore, such systems may be well suited for mission critical applications, but are too price prohibitive to be used for each and every purpose within a company.
The cloud services are extremely flexible in expansion of capacity and redundancy (at least on paper). But quality of service and support may be lacking, as may speed of access. So cloud based storage may only be logical if you rent the full package - server plus storage in the cloud - to guarantee an overall service level. The remaining issue is lock-in: once you start using a cloud provider, leaving it may be a challenge, since you have adjusted your operation to its service and it may be costly to shift providers.
The open source systems are an interesting project, and can provide a very cheap solution for lower tier functions. But actively using such a system means dedicating an employee or a team of homegrown experts to properly support it. Also, redundancy and high availability can become an issue in such systems.

In summary, do not choose only one storage solution: the enterprise system is well suited for business support, but it is huge overkill for test or proof of concept systems. Cloud storage services are a good choice for a cloud based infrastructure, but the lock-in issue requires a careful strategic approach before lock-in occurs. So use everything, and always evaluate any solution for at least 3 months before committing to it.


Talkback and comments are most welcome


Related posts
RAID and Disk Size - Search for Performance
