Keeping unneeded sensitive data off your computer

During everyday work our computers collect all kinds of information: e-mail is received, browser history is recorded, files are created. In all this exchange, a significant amount of sensitive data can accumulate, even without any action by the user (for example, being placed in CC on e-mails).

Most of this data is of little daily use and is in fact a liability. It is very good practice to check what information your computer has gathered over the course of daily work, and to clean out the unnecessary sensitive data.


The definition

First, let's define sensitive data. The University of California defines sensitive data as:

Information for which access or disclosure may be assigned some degree of sensitivity, and therefore, for which some degree of protection or access restriction may be warranted. Unauthorized access to or disclosure of information in this category could result in a serious adverse effect, cause financial loss, cause damage to reputation, or constitute an unwarranted invasion of privacy.


The test

Everyone's first reaction is: 'This can't happen to me!' Yet it is well known that many computers get sold with huge amounts of sensitive data still on them. So we performed a simple test: we ran the tools on the laptop of a university assistant professor. These are the results:

  • 3 of his credit card numbers were saved in the browser history
  • 7 e-mails containing lists of students' Social Security numbers, sent by Student Services with the user placed in CC and only briefly read.
  • 4 files with home addresses of project team members and partners, from a project that ended 2 years ago.

Anyone performing this check will be very unpleasantly surprised at the amount of sensitive data on their computer.

The tools

This definition makes a great point: if you don't work with it, remove it! To ensure that your computer is free of sensitive data, you can use several tools to locate possible sensitive data. Bear in mind that no tool can determine conclusively what is or is not sensitive data, but automated tools are great at sifting through gigabytes of information to locate patterns that resemble sensitive data.
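To illustrate the kind of pattern matching these tools perform, here is a minimal, hypothetical Python sketch: it flags digit runs that pass the Luhn checksum as candidate credit card numbers, and dashed nine-digit groups as candidate SSNs. Real scanners go far beyond this, but the principle is the same.

```python
import re

# Hypothetical minimal scanner - real tools are far more thorough.
CC_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # candidate card numbers
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # US SSN in dashed form

def luhn_ok(number: str) -> bool:
    """Luhn checksum - weeds out most random digit runs."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan(text: str) -> dict:
    """Return candidate card numbers and SSNs found in the text."""
    cards = [m.group() for m in CC_RE.finditer(text) if luhn_ok(m.group())]
    ssns = SSN_RE.findall(text)
    return {"cards": cards, "ssns": ssns}
```

For example, `scan("Card: 4111 1111 1111 1111, SSN 078-05-1120")` flags both the well-known Visa test number and the famously decommissioned Woolworth SSN. Note the false-positive problem the tools share: any Luhn-valid digit run, such as an invoice number, gets flagged too.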

We have compiled a list of 3 tools that can help you discover potential sensitive data on your computer. The tools are listed in alphabetical order, and each is presented with its own pros and cons.

Identity Finder
  • Commercial application that can be used to find sensitive data, as well as providing other functions such as protection of identified files.
  • Pro: Apart from standard credit card numbers and SSNs, it also searches for the string password: and can thus find a lot of passwords stored in cleartext. It is quite efficient in its search and offers quick remedies, like destruction of identified files with sensitive data, or protecting the data. It is also capable of searching Outlook PST files. The enterprise version apparently works with web sites, but Shortinfosec was not able to test this functionality.
  • Con: It is a commercial application, so you need to pay for it :)

senf
  • A simple search tool from the University of Texas, designed to look for Social Security Numbers and Credit Card numbers.
  • Pro: Nearly no configuration effort, just start it and send it searching.
  • Con: Not useful for anything except SSN and Credit Card Numbers.


Spider
  • A very good open source tool for finding sensitive data.

  • Pro: Allows great flexibility of searches and comes close to the range of a commercial application. Although not as easy to use as a commercial counterpart, it supports search by regular expressions, so you can search for nearly anything. It is capable of searching Outlook PST files. It is also capable of searching web sites, which works quite well.
  • Con: You need to know regular expressions to make the most of it, and the presentation of results is not very clear, especially for Outlook PST files

Conclusion
Sensitive data scanners are a very useful set of tools. Although they are all plagued by huge numbers of false positives, they also find the really nasty forgotten sets of data which everyone is better off without.
So, a periodic scan for leftover sensitive data is a very good practice for maintaining the security of your computer. This is even more true for enterprises, where this check-up should become part of the regular security awareness program and the security check of corporate computers. A home user can achieve excellent results with open source tools, but for enterprises that require centralized management and reporting, a commercial solution may be an option.

Talkback and comments are most welcome

Related posts
5 rules to Protecting Information on your Laptop

5 Famous Hacker Profiles: White and Black Hats

Hackers, like the cowboy heroes in classic Westerns, come with either a white or a black hat. Some wear both, but most can be distinctly classified according to the way they use their abilities: for good or for evil. Black hats tend to wreak hacker havoc for personal gain or just to have fun with the general population by testing their skills and exploiting computer systems.

White hats, on the other hand, use their abilities to help create hacker-proof systems or occasionally bend laws to create innovative and exciting technology.
The following list of famous hackers includes both white and black hats, since the bad guys should never get all the attention.


Stephen Wozniak
“Woz” is a white hat who is well-known for being the cofounder of Apple. His first hacking endeavor was to make free long-distance calls by creating “blue boxes” to bypass phone-switching mechanisms, and some of his college friends claimed that he had called the Pope, pretending to be Henry Kissinger.
Even during his college career, Woz worked with Steve Jobs (Apple’s CEO) to market his blue boxes to classmates. The hacker then dropped out and began working on the computer that became the Apple I, which Jobs helped bring to the public. After a long and successful career, Woz has left Apple and now focuses on philanthropy, providing new technology and computer equipment to the Los Gatos School District in CA.


Kevin Mitnick

You’ll probably recognize this name as a definitive black hat hacker, but he later donned the white hat as a security consultant. Mitnick started his hacking career by manipulating the LA bus punch card system to get free rides, then (like Woz) became interested in blue boxes, finding a way around long-distance phone call payments.
His hacking behaviors escalated and he was eventually convicted for hacking into multiple systems of the Digital Equipment Corporation (DEC) to view Virtual Memory System (VMS) code, costing DEC an alleged $160,000. Mitnick also admitted to stealing software from Motorola, Novell, Fujitsu, Sun Microsystems, and other companies in addition to altering the computer systems of the University of Southern California.
After serving his sentence of five years, this hacker started Mitnick Security Consulting, LLC and is now turning a profit as a white hat.


Jonathan James

This black hat became famous for being the first juvenile hacker to be sentenced to prison, caught at age 15 and prosecuted at 16.
James hacked the Defense Threat Reduction Agency (DTRA) of the Department of Defense, NASA, BellSouth, and the Miami-Dade school system, stealing confidential information and software valued at nearly two million dollars.
The young hacker insisted that the NASA code he stole was intended to supplement his studies of C programming, but that it was “crappy” and not worth the $1.7 million price tag claimed by NASA. His actions cost the space program $41,000 in damages to its computer systems.


Adrian Lamo

Lamo is a “gray” hat-turned-white hat and currently specializes in threat analysis, journalism, and public speaking. He’s been using his hacking skills to help identify security flaws in the networks of Fortune 500 companies and isolate leak sources that threaten homeland security.
However, prior to this white hat streak, Lamo hacked into Microsoft, The New York Times, and Yahoo! News using Internet connections in public places such as coffee shops and libraries. He consistently found ways to penetrate systems, then informed companies of their vulnerability; however, because he was not hired to do this, it was seen as a threat.
His nearly-black hat career escalated when he began viewing Social Security numbers and giving himself clearance within company systems to access other confidential information. Lamo was ordered to pay $65,000 in restitution to The New York Times and underwent home confinement and probation before donning his white hat.


Kevin Poulsen

Black hat Poulsen’s hacking tended to involve telephone lines, and he used his unusual skills to manipulate radio shows and contests. By taking over all phone lines used by KIIS-FM radio in LA, he took the liberty of “winning” a new Porsche and other prizes. Following this performance, Poulsen hacked into a federal investigation database and viewed wiretap information on “secure” computers.
Other hacking offenses include reactivating old numbers from the Yellow Pages and crashing the phone lines meant to receive information about his whereabouts during an Unsolved Mysteries special. After being ordered to pay $56,000 in restitution and serve over four years in prison, Poulsen decided that a white hat might look good on him and used MySpace profiles to identify 774 sex offenders. He’s also worked hard to become senior editor of Wired News.


This is a guest post by Alexis Bonari. She is a freelance writer and blog junkie. She is a passionate blogger on the topic of education and free college scholarships. In her spare time, she enjoys square-foot gardening, swimming, and avoiding her laptop.

Talkback and comments are most welcome

Related posts
8 Tips for Securing from the Security experts
5 Ways to fail a Social Engineering Pen-Test

GFI WebMonitor - A good step ahead

Web content filtering and security products are already a maturing market. The need for monitoring and controlling user access to the Web is recognized as critical for today's businesses.

GFI Software is entering this market arena with a solution named GFI WebMonitor. This product is available either as a standalone proxy version that works in most network environments or as a dedicated plug-in for organizations that have deployed Microsoft ISA Server.

Installation
The installation is very easy; the only really critical step is that the admin needs to decide in which mode the software will run. GFI WebMonitor can run in the following modes:

  1. Simple Proxy mode - In this mode, GFI WebMonitor operates on a server with a single NIC and functions as a proxy. In order to use it, block direct access to the Internet from the clients and set their browsers to use the GFI WebMonitor system as a proxy.
  2. Traffic forwarding mode - In this mode, GFI WebMonitor works 'inline', and acts as a router/proxy. To operate in this mode, you need to install GFI WebMonitor on a server with two NICs and routing ability (like Windows RRAS)
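In Simple Proxy mode, the only client-side change is the browser's proxy setting. As a hedged illustration (the host name and port below are placeholders for your own deployment, not GFI defaults), this is how a script could route its traffic through the WebMonitor box to verify the proxy path:

```python
# Hypothetical client-side check: route a request through the proxy.
# Host and port are assumptions - substitute your WebMonitor server.
import urllib.request

PROXY = "http://webmonitor.example.local:8080"  # assumed proxy address

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

# Uncomment on a network where the proxy is reachable:
# with opener.open("http://example.com", timeout=10) as resp:
#     print(resp.status)
```

A blocked category should then come back as the WebMonitor restriction page instead of the requested site.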
We will observe the operation of GFI WebMonitor in Simple Proxy mode - the mode that is easier to set up and which will be the default choice of most companies. According to the documentation, GFI WebMonitor is designed for corporate use. In order to understand how GFI WebMonitor matches corporate expectations, let's define a corporate environment scenario in which GFI WebMonitor will have to perform:

Corporate Scenario


Internet users
A typical corporate organization will have the following Internet users:
  1. Standard Internet Users - The generic corporate grunts, people who are not expected to use the Internet during most of their work day. Their access is limited to the most basic Internet use, and download of PDF, Word and PPT files up to 2 MB in size.
  2. Power Internet users - Power Internet users, requiring access to a lot of Internet locations, and who regularly download documentation (PDF, Office) and media (audio, video, flash) from the Internet. These files can be of a larger size, up to 50 MB.
  3. Management - The top brass who, although they use the Internet very rarely, should not feel overly limited
  4. Exceptions - For research or testing purposes, exceptions to all rules must exist
Corporate policy
The typical corporate organization has an Internet access corporate policy. Here is a sample one:
  • Rules for all users
  1. No access to gaming sites, porn sites, narcotics or alcohol abuse sites, gambling sites, spamming and hate mail, racism and hate sites, job search sites, social media and instant messaging sites, web based e-mail services, virus and malware sites, hacking or exploitation sites, personal financial gain sites.
  2. No workaround bypass of this policy is permitted
  • Rules for Standard Internet Users
  1. No access to news sites, media sites, file sharing sites
  2. Download limit set to 5 MB per file
  3. Permitted files - HTML, Images, XML, PDF, PPT, DOC(X), XLS(X)
  4. No malware should be downloaded
  5. Limit bandwidth to a maximum of 10kbps per user
  • Rules for Advanced Internet users
  1. No access to file sharing sites
  2. Download limit set to 50 MB per file
  3. Permitted files - HTML, Images, XML, PDF, PPT, DOC(X), XLS(X), AVI, MP3, MP4, FLV, VSD, Archives containing these types of files
  4. No malware should be downloaded
  5. Limit bandwidth to a maximum of 150kbps per user
  • Rules for Managers
  1. Download limit to 500 MB per file
  2. Permitted files - PDF, PPT, DOC(X), XLS(X), AVI, MP3, MP4, FLV, VSD, Archives containing these types of files
  3. No malware should be downloaded
  4. Limit bandwidth to a maximum of 250kbps per user
Internet usage reports must be submitted to the Information Security Officer on request and in a monthly automatic report
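As a quick sanity check of the caps above (assuming 'kbps' means kilobits per second), here is the time a maximum-size download would take at each group's bandwidth limit:

```python
# Worst-case download time implied by the sample policy above.
def download_minutes(size_mb: float, kbps: float) -> float:
    """MB -> kilobits, divided by the kilobit/s cap, in minutes."""
    return size_mb * 8 * 1000 / kbps / 60

print(round(download_minutes(5, 10)))     # standard user: ~67 minutes
print(round(download_minutes(50, 150)))   # power user: ~44 minutes
print(round(download_minutes(500, 250)))  # manager: ~267 minutes
```

At 10 kbps, even the 5 MB limit takes over an hour to reach, so the caps interact: whichever bites first effectively sets the policy.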

GFI WebMonitor Performance against scenario

We used all functions of WebMonitor to simulate the corporate scenario as closely as possible. We set up groups for web filtering and download access, and tested for normal functionality.

GFI WebMonitor has a simple but useful tactical dashboard for overview



Web Filtering Control

The good
  • All restricted areas can be set up in the web filtering control, and were properly blocked with a restriction message. If the default policies are not sufficient, you can include or exclude sites manually, or you can suggest categorizing a site in GFI's database, so it gets into the policy automatically.

The issues
  • A minor administration issue we found is that the categories are not explained; it took us some time to discover that Instant Messaging is filed under Internet Communications. A dynamic description should appear as a category is selected - this would make the admin's life much easier.
  • The functional issue we found is that there is no bandwidth control at all. GFI might argue that this is not the function of a content filter, but there are competing products which provide it.

Download Control
The good
  • The download controls can define the file types that can be downloaded
  • The integrated proxy can save the already downloaded files, thus reducing internet link load


The issues
  • There is no file size limit to apply to groups, so corporations cannot limit users to downloading files only up to a certain size and thus prevent hogging of the Internet link.
  • Download restrictions can be bypassed by hiding files within other files (a zipped executable, or an executable embedded as an object within a Word file)
  • Selection of items in the download control is a bit difficult, since you need to open each item specifically. This is mostly a cosmetic issue, but it can nag the administrator
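The archive bypass mentioned above is easy to reproduce. A sketch, with a harmless placeholder standing in for the benign executable we used in testing:

```python
# Wrap a blocked file type inside an allowed archive; a filter that only
# inspects the outer extension will let it through.
import io
import zipfile

payload = b"placeholder bytes"  # stand-in for a real (benign) .exe

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("tool.exe", payload)

archive = buf.getvalue()  # transfer this .zip through the filter
```

A filter can only close this hole by unpacking archives and inspecting the contained file types, which GFI WebMonitor did not do in our test.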

Spyware and virus protection

The good

The antivirus protection worked as expected, and it identified the EICAR antivirus test file.


The issues
  • The antivirus protection only worked on the second attempt. The first time, EICAR was downloaded and wasn't detected as a virus. We checked the antivirus engines and found that they had remained in Downloading and updating status for the entire 5 days of testing. After we forced the update to finish (which required a reboot of the GFI WebMonitor computer and about an hour of patience), the EICAR file was detected as a virus threat. We can't identify the reason for this behavior
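The EICAR file we used is the industry-standard harmless test string that every antivirus engine is expected to detect. Reproducing the check takes a few lines (the file name is arbitrary):

```python
# Write the standard 68-byte EICAR test string to a file; a working
# resident antivirus should flag or quarantine it immediately.
import os
import tempfile

EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

path = os.path.join(tempfile.gettempdir(), "eicar_test.txt")
with open(path, "w") as f:
    f.write(EICAR)
```

Serving this file from a web server and downloading it through the proxy is exactly the test that failed on our first attempt.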

Phishing protection

The good
  • The phishing control is very effective. We tested against a fresh phishing site (at the time of the test, live for only 5 hours), and it was properly blocked both by GFI WebMonitor and by Firefox phishing protection. The test site was selected from PhishTrack


    Instant Messaging Control

    The good
    • We tested with Windows Live Messenger, and notifications are properly delivered to the administrator.

    The issues

    • This function looks more like a nice idea than real functionality. It only works for Microsoft IM protocols, and is not useful for Skype, XMPP (Jabber), YMSG (Yahoo) or Gadu-Gadu. These protocols will either pass undetected or will not work at all.

    Reporting

    The good
    • GFI WebMonitor has a brief set of reports integrated within its engine, and there is a free ReportPack add-on especially for reporting.



    Conclusion

    GFI WebMonitor is a nice step in the right direction. The product is very easy to install, and a company that starts using it can see its benefits by the end of the first day of use.
    It matched all the basic requirements of our sample scenario, and only failed at the most advanced expectations. We have some reservations about the antivirus, but this is probably due to an error in our installation or a bug that will be fixed.

    In order to evaluate whether GFI WebMonitor meets your requirements, simply note down your corporate scenario and install the evaluation version. You'll be able to evaluate the match to your requirements very quickly.


    Talkback and comments are most welcome

    Managing Antivirus Software - Keep the reinstall away

    Having an anti-virus on your computer systems is one of the standard best practices for every computer user, regardless of whether you are a home user or a business.

    Although there are a lot of users (both corporate and home) that consider the anti-virus a useless weapon, it still provides a very real protective layer on your computers. No anti-virus is 100% effective, but even at 80% effectiveness, that means a whole lot fewer problems with malware.


    Here are some simple guidelines for selecting and managing your anti-virus environments:

    Home Environment

    Managing an anti-virus in a home environment is relatively easy. Most users have 2-4 computers at home, and they need to set up an anti-virus on every one of them. The most important elements are:

    • Regular updating of signatures from the manufacturer
    • Active real-time protection
    • Regular (weekly or monthly) scheduled scan
    In order to keep your home anti-virus system in good condition, you need to:
    • Set the antivirus to perform automatic cleaning with quarantine (no delete) - this way, even if you get a false positive, the file isn't deleted and you can rescue it from quarantine
    • Check the update version - check whether updates are still current and there are no issues with updating
    • Review the last scan results - this way you will be alerted if malware is identified
    • Review the quarantine - to find if false positive files were captured by the anti-virus and need to be 'rescued'
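The 'check the update version' step can even be automated. A hypothetical sketch - the definitions path and the three-day threshold are assumptions, since every product stores signatures somewhere different:

```python
# Warn when the antivirus definitions file has not changed recently.
import os
import time

DEFS_PATH = "/var/lib/example-av/signatures.db"  # assumed location
MAX_AGE_DAYS = 3                                 # assumed freshness policy

def updates_stale(path: str = DEFS_PATH, max_age_days: int = MAX_AGE_DAYS) -> bool:
    """True if the definitions file is older than the allowed age."""
    age_seconds = time.time() - os.path.getmtime(path)
    return age_seconds > max_age_days * 86400
```

Run from a scheduled task, a check like this turns a manual weekly chore into an alert that only fires when updating has actually broken.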
    Choosing the product
    Then it's about price and functionality. The home user can choose a free product, or buy antivirus protection. Here is a sample of criteria to review when choosing an anti-virus:

    • Legitimate antivirus software - What you need to be very careful about when setting up a home antivirus environment is that the product really is an anti-virus. Wikipedia references SpyWare Warrior's findings that more and more malware masquerades as legitimate anti-virus. In order to avoid these malware decoys, you can consult the Wikipedia list of anti-virus software.
    • Range of malware that you are protected from - Can the engine detect virus, spyware, rootkits, etc.?
    • Behavior-blocking - Does the antivirus monitor system calls with a heuristics engine to prevent vulnerability exploitation attempts and zero day virus breakouts?

    Corporate Environment

    Managing an anti-virus in a corporate environment is a lot more work. There are hundreds, even thousands of computers that need to be protected. In such an environment you need to fight the following battles:
    • Keeping clients up-to-date - when updating hundreds of computers, there will be issues - computers that are off, computers where the antivirus software has failed for any reason, issues in communication with the update server
    • Keeping clients compliant to policy - same as above, updates to policy may fail or be in significant delay
    • Preventing the anti-virus servers from overloading - updating hundreds of systems can cause hogging of the update server or the Internet link.

    In order to keep your corporate anti-virus system in good condition, you need to:
    • Set the updating frequency according to corporate policy - updating the anti-virus in a corporate environment needs to be planned. Updates may be needed more than once per day, but if you make them too frequent you'll end up overloading the antivirus server with requests.
    • Balance the load of management and updates in a distributed environment - when you have branches, it is wise to distribute the burden of updates and management to branch servers and administrators.
    • Implement additional policy elements - anti-virus software may also be used to enforce corporate policies of not running certain software during parts of the day (for example, block the media player from 9 to 12 and from 2 to 5)
    • Schedule automated scans - similar to the home users, scheduled scans are good for confirming that nothing is sleeping in downloaded documents, unopened files etc.
    • Schedule automatic reports - Your best tool for keeping the corporate antivirus infrastructure in good condition is an automated report. This way, a report on the number of non-updated or non-compliant clients reaches the administrator without manual effort

    Choosing the product

    When implementing a corporate anti-virus solution, the criterion of choosing a legitimate (non-malware) antivirus is not important - there are no malware products designed to pose as corporate antivirus systems.
    And even if someone tried to make such malware, it would be immediately identified, since corporate anti-virus solutions are constantly evaluated - both by independent technology sites and companies, and by other manufacturers of anti-virus solutions assessing the competition.

    But there are other criteria for corporate anti-virus that need to be evaluated. Here is a sample of criteria:
    • Range of malware that you are protected from - Can the engine detect virus, spyware, rootkits, etc.?
    • Behavior-blocking - Does the antivirus monitor system calls with a heuristics engine to prevent vulnerability exploitation attempts and zero day virus breakouts?
    • Expanded functionality - System firewall. Does it provide blacklists and white lists for addresses and domains?
    • Policy control - Does the antivirus provide controls to enforce corporate policies regarding use of certain elements of the computer system? For example, an antivirus system may provide policies to prevent running of certain applications, although they are not malware, or prevent access to usb storage devices etc...
    • Signature Updates - How large and frequent are signature and other updates? This can range from one update per day to multiple updates per day. This is a very significant issue: a signature file that is updated only once per day can be quite large, so in a large corporation the update process will hog the central antivirus server.

    Conclusion
    Depending on whether you are running a home or corporate environment, you face different challenges with antivirus solutions. But regardless of environment and product, you will be very grateful that you are running an antivirus the day someone you know loses data or reinstalls their computer due to virus corruption.


    Talkback and comments are most welcome

    GFI WebMonitor Review

    GFI has published an opportunity to review their WebMonitor product. It is designed as a competition, with some prizes for the best reviewers.

    Shortinfosec will be performing the review, but we will focus on the product quality. So, our readers may even expect some rants and constructive criticism.

    Anyone wishing to perform the review can find it at
    http://www.gfi.com/blog/software-reviewers-wanted/

    Is Geo Location Based DDoS Possible?

    A while ago Shortinfosec published an article by Michael Coates about Geo Location based DDoS.
    The article sparked some interest, and we decided to delve deeper into the issue.
    Shortinfosec performed a basic analysis of the possible impacts of a Geo Location based DDoS.


    ITU has published that there are 4.6 billion mobile phones worldwide. That is a truly formidable number, and quite capable of performing a DDoS attack on any mobile network.
    But creating a DDoS attack isn't as simple as it looks - especially a Geo Location based one. In order to mount a DDoS attack, you need the following ingredients:

    Software that will make the attack
    - The software will have to use the geo location function (to know where the phone is) and the telco function (to create the DoS) of the mobile phone. Variants of such software are available and can be developed with relative ease for any smartphone platform. Examples of apps that use Geo Location and telco functions are GPS tracking apps (for child tracking or employee tracking) as well as 'Cheating Spouse Spy' apps. They access the geolocation and send out data streams or SMS messages. Some of them are even remotely controllable via SMS. They can be easily modified to create a DoS via SMS or data stream swamping.
    Means of distribution of the attack software
    - In order for a DDoS attack to succeed, you need a high volume of attacking ('zombie') devices. In a Geo Location DDoS you attack something at one geographic location, so the zombie phones need to be at or around the target location. This means that you need to persuade a lot of people to install the attacking app on their phones. There are two options for this task:

    1. An App that everyone will like - This is very hard to achieve: whatever your App is - even a game - the percentage of people who will like it can be very limited. Also, you need to develop the App for a lot of platforms, since there are many phone manufacturers, each with several different OS platforms.
    2. A self-distributing (virus-like) application - this poses a whole set of challenges: a virus can self-distribute either through a vulnerability of the operating system, or through user action (like sending an SMS with instructions to install an app). Phone users do not readily install new apps simply because an SMS instructed them to, and good luck finding vulnerabilities in a sufficient number of platforms and versions of phone OS.
    Sufficient concentration of Geo Location enabled zombie phones in the targeted areas - Now this is a real numbers game with a lot of interesting results. Targeted areas will be large metropolitan areas which are home to large businesses - these will have the highest concentration of zombie phones, and it is where most damage to the reputation of the mobile provider can be done.

    To estimate the number of zombie phones in any given area, we need some starting parameters. We'll use worst case values for every parameter:

    1. Geo Location enabled phone percentage of the total phone population (between 24% and 95%) - Gartner estimates that smartphones make up 18% of the total number of mobile phones. We'll assume that every smartphone has Geo Location ability, and we'll use percentages higher than 18%, since the target area will have a population with greater means and need for a smartphone. For the US, we'll use 95%, simply because the FCC E911 Phase 2 directive mandates that 95% of all subscribers of US mobile networks have some form of Geo Location.
    2. Percentage of phones that will be targeted by the attack app (51%) - since there are multiple manufacturers and platforms, the attacker needs to target the population with the highest probability of success - the largest phone population with similar characteristics. We'll use the market share of a single platform - Symbian, which according to Gartner held 51% of the smartphone platform market.
    3. Successfully zombified phones (20%) - the target population of mobile phones cannot be fully controlled. The widest penetration of a virus infection was the Melissa virus, which is estimated to have infected between 15% and 20% of all computers worldwide. We'll use 20% for good measure.
    4. Area where most attack phones will reside (16 million square meters) - on a business day most Geo Location based phones will be within the city business area. For a city of over a million inhabitants, this area is at least a 4 kilometer by 4 kilometer square (2.49x2.49 miles). That is 16 million square meters.
    5. Concentration of zombified phones (50% within the attack area) - on a business day we will assume that 50% of zombie phones will be within the attack area
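Multiplying the five parameters through shows how such a table is built. For a hypothetical city with 5 million mobile phones (the city size is our assumption; the rates are the worst-case values above):

```python
# Worst-case zombie phone estimate for one hypothetical city.
phones         = 5_000_000  # assumed total mobile phones in the city
geo_enabled    = 0.24       # Geo Location enabled share (0.95 for the US)
platform_share = 0.51       # single dominant smartphone platform
zombified      = 0.20       # successfully infected share
in_area        = 0.50       # concentrated in the business area on a workday

zombies = phones * geo_enabled * platform_share * zombified * in_area
print(round(zombies))  # → 61200
```

Roughly 61 thousand phones attacking simultaneously from a 16 square kilometer area - small against the whole network, but concentrated on a handful of radio cells.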
    Based on these parameters, we created a table which calculates the number of zombified phones in large metropolitan areas throughout the world.


    Analysis of the table
    Assuming that the parameters of the analysis can be met (especially the number of phones that are zombified), here are the results:

    Overwhelming the network - highly unlikely: The maximal number of zombie phones represents 2.41 to 9.7 percent of the total phone population of the urban areas. The mobile network switches are designed to handle traffic spikes, so they will be able to handle an increase of at most 10% of the total city population.

    Overwhelming the central area - possible: Long before the DDoS attack can overwhelm the network switches, it will hit a bottleneck: the mobile radio cells have a technical limit on the number of active calls, so in a DDoS scenario the mobile cells where most zombified phones reside will be affected.

    Overwhelming Hot spots - very likely: Even within the attack target area, there are hot spots with huge concentrations of mobile phones - large office buildings and business parks. These hot spots are rarely treated with a dedicated set of cells, and the DDoS attack will most likely overwhelm the available cells.

    In simpler terms, on a business day the cells in the business area of the city will have more requests for service than available channels, so there will be a lot of No Service or No Network messages within the central attack area.

    Detection and remedy - at least several hours: The mobile network operator will immediately identify the overwhelmed cells, but it will take hours to identify the pattern of who is creating the congestion. Even then, the remedy will not be simple, and will come down to disabling service for every identified zombie phone. This will take several hours the first time around. But once this particular type of attack is identified, a lot of effort will be put into creating automatic or semi-automatic detection and disabling systems, so after several attacks this correction will be brought down to a maximum of several tens of minutes. Also, mobile operators have the financial means to go after the initiator of the DDoS with every available investigative and legal tool


    Conclusion

    The parameters in this table are based on a worst case scenario, but grounded in current numbers of phones and estimated Geo Location ability.

    The estimation assumed that the attacker can actually install the attack app on 20% of Geo Location enabled devices. This assumption is very far-fetched, and therefore the entire scenario is not very realistic.


    The future may be darker: if we start using a common mobile platform, similar to the Windows prevalence in the PC world, and with the Geo Location function becoming either a commodity or even a mandate, the parameters of the analysis could change dramatically - and make mobile networks vulnerable to DDoS attacks.

    Talkback and comments are most welcome

    Related posts
    Geo Location based DDOS can target Mobile Operators
