Thursday, October 11, 2018

A long overdue reset

1:59 PM Posted by Bozidar Spirovski
It has been more than 5 years since I last updated this blog. Going back and reading through the material, I find a lot of legacy and already forgotten tools and processes, some mistakes in hindsight, and even some good articles that have held their value.

I decided to move all articles back to draft mode and will revise and re-publish those that are still relevant.

In the meantime, I apologise for the emptiness around here.

Sunday, September 30, 2018

Simple OSINT treasure hunt challenge

5:50 PM Posted by Bozidar Spirovski
I created a simple treasure hunt challenge for young InfoSec enthusiasts and professionals. It's mainly OSINT with some very basic crypto and a tactical story. There is no real prize except bragging rights and a mention, if you choose so.

No details are collected, and you can choose whether to submit the final solution.
The challenge starts on my LinkedIn profile: https://www.linkedin.com/in/spirovskibozidar/

The challenge will be active until the end of October 2018, after which a full writeup will follow.

Also, ideas for the next challenges are greatly appreciated :)

Monday, November 22, 2010

Steganography - Passing through the defenses

6:08 PM Posted by Bozidar Spirovski
Steganography is still considered part of the obscure toolkit of secret agents and corporate spies.

However, steganography tools are widely available and anyone can use them; most are now available online. At the same time, many systems perform some form of resampling or filtering on the images that pass through them.

This poses an interesting question - how well does a hidden message survive when its carrier image passes through a filter?

This also gave us a great reason to publish another set of pictures (albeit cropped) of Lena Söderberg ;) Here is our original image


Proposed Counter-Steganography System
The filter system will need to be cost-effective, minimally intrusive and not prone to error. Since there may be many different steganography algorithms, the filter system should not try to read hidden messages - doing so would require an entire farm of filter servers. Instead, the system resorts to a much simpler mechanism:

  1. Modify all passing images so that any hidden data they carry is compromised.
  2. Use only minute changes to the images, so that the user expecting to see an image cannot discern any loss of quality (see the sketch below).
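
As a rough illustration of point 2, here is a minimal sketch of such a sanitizing filter in Python, assuming the Pillow library is available; the file names and the sanitize helper are my own invention, not part of any existing product. It never tries to detect or decode a payload - it simply perturbs every image slightly, which is usually enough to break fragile LSB-style hiding schemes without a visible loss of quality.

    # Minimal sketch of a counter-steganography "sanitizing" filter (assumes Pillow).
    from PIL import Image, ImageFilter

    def sanitize(in_path: str, out_path: str) -> None:
        img = Image.open(in_path).convert("RGB")
        # A very mild blur: visually almost imperceptible, but every pixel is
        # recomputed from its neighbours, scrambling low-order-bit patterns.
        cleaned = img.filter(ImageFilter.GaussianBlur(radius=0.4))
        # Lossy re-encoding adds a second layer of disturbance.
        cleaned.save(out_path, format="JPEG", quality=90)

    # Hypothetical use on an outbound gateway:
    # sanitize("outgoing/photo.png", "outgoing/photo_clean.jpg")
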
The Test
In our test we will use the Lena Söderberg test image and three common image enhancement filters. We will hide and extract the message using the online tool at Mozaiq.Org.

Our operating assumption is that a message with higher redundancy has a better chance of surviving a filter. Thus, our test message is the following:
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus in risus erat
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus in risus erat
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus in risus erat
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus in risus erat
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus in risus erat

Here is the image of Lena Söderberg with the message embedded within it:


After hiding the message inside the image, we'll pass the image through different enhancement filters and then try to extract the message from the filtered image.
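
Since the embedding algorithm behind the online tool is not published, here is a minimal stand-in sketch of the same pipeline using a naive LSB scheme and Pillow's built-in filters; the helper names, the file name and the filter parameters are assumptions chosen to roughly mirror the test, not the tool's actual method.

    # Sketch: embed a message in the red channel's least significant bits,
    # run an enhancement filter, then try to read the message back.
    from PIL import Image, ImageFilter

    def lsb_embed(img, message):
        bits = "".join(f"{b:08b}" for b in message.encode()) + "0" * 8  # NUL terminator
        out = img.convert("RGB")
        px = out.load()
        w, _ = out.size
        for i, bit in enumerate(bits):
            r, g, b = px[i % w, i // w]
            px[i % w, i // w] = ((r & ~1) | int(bit), g, b)
        return out

    def lsb_extract(img, max_chars=500):
        px = img.convert("RGB").load()
        w, _ = img.size
        bits = [str(px[i % w, i // w][0] & 1) for i in range(max_chars * 8)]
        data = bytes(int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8))
        return data.split(b"\x00")[0]

    stego = lsb_embed(Image.open("lena.png"), "Lorem ipsum dolor sit amet " * 5)
    filtered = stego.filter(ImageFilter.UnsharpMask(radius=1, percent=10, threshold=1))
    print(lsb_extract(stego))      # the message comes back intact
    print(lsb_extract(filtered))   # typically garbage after any filtering
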
1. Sharpen Filter - The first filter to be tested is the Sharpen Filter, applied with Sharpness=2. Here is the filtered image, and the following message is extracted from it:


LoremJ� @�: ���Ѽsit�km t� consecf�t* ad piscin� u| tJ|�h s l����G�l�l� �h�z~� 5r�f�v��f�� ��j\)��5KT1��ķQo�s~cΓy?�� ɉ�C�$�� O�4E!L�r_x�߆��Ƥ �� b;��� \G;*W�.=� �1 楄 �M) Z*>֟ " °�N�(��%�J]u� �dRp�s���Χ �
G�?� e-e� E�͹g�� s�s�e�a�D�moF�O[t�h �ˀ2��i� _? � Լ�);c�s� &hD��DF �ͬ�8Q��1T� Cr!�us� �F�j�l߫��M-�_�Y��i�$�DIHQ�u�g����?0Xt�1c�� �ecTS� id_p�̦iG����Q�.�agaa��d��\�� ri u��

2. NF Filter - The second filter to be tested is the NF Filter, applied with the default Alpha=0.30 and Radius=0.35. Here is the filtered image, and the following message is extracted from it:


Lo�eB�ٷs��7,� o_� � � ]t,(;��Rec�(ξrg d�p_sc nw g)�t� �kK�?1� o�nJ�8 �0;֦a �4�Cr� <��` RorLP �W�jd Fol�4ix " v����oo��� �� �i@^���r� l� ����=� l>SsC�nP �ą�v�)��EyC G�� p `8�2��Ʃ&��t��\�Yr�� Is�&t�tD>�%.�pͮǿ ��T �Z� Mha�e&l�s ƾ��`s���Mc

3. Unsharp Mask - The third filter to be tested is the Unsharp Mask, applied with Radius=1, Threshold=1 and Amount=0.1. Here is the filtered image; the extraction attempt returns only an error:


Error: The image that you tried to decrypt does not appear to have a message in it. It is possible that you entered the incorrect password. Please try again.

Conclusion
Once an image passes through a filter, any hidden message will be corrupted. Redundancy in the hidden message helps, but only against some types of image manipulation and only at very low filter strengths.

Any digital picture retouch filter will damage the hidden message within a steganographic image.
Naturally, this conclusion is nothing new - but the test shows that even a small, visually non-disruptive filter can do a lot of damage to a steganographic payload.

Now go and check whether your outbound filters do something like this :)


Monday, April 12, 2010

Choosing a Disaster Recovery Center Location

2:11 PM Posted by Bozidar Spirovski
When preparing a Disaster Recovery Center, one of the most important decisions is the location of the Disaster Recovery Center. Up until 9/11, a lot of companies kept their DR centers in an adjacent building; right after 9/11, everyone wanted to go as far from the primary data center as possible.


One of the common misconceptions of Disaster Recovery planning is that longer distance ensures better disaster protection. Of course, increasing the distance between data centers reduces the likelihood that the two centers are affected by the same disaster. But just putting distance between locations may not be sufficient protection. In reality, the best distance for a DR location is dictated by a multitude of factors:

  • Is the Cloud a good solution - these days, building out a dedicated DR datacenter may be completely redundant and may delay the DR implementation by months or even years. If you can implement the DR solution in a cloud-based service (a remote datacenter or one of the major cloud providers), make sure you consider it. There are a lot of pros and cons, but keep an open mind and do a proper review.
  • Minimal parameters dictated by regulators - certain businesses, especially telcos and financial institutions, must maintain regulatory compliance. It is not unusual for regulators to mandate a minimal distance between the primary and the Disaster Recovery location. You must comply with these parameters.
  • Corporate RTO parameters - the company has decided that the Disaster Recovery Center must be up and running within the time defined as the RTO (Recovery Time Objective). This time includes the travel time to the Disaster Recovery center and the system activation time, so always take this parameter into account when choosing a Disaster Recovery site.
  • Telecommunications services - a larger distance between the primary and DR site means higher telecommunication costs and limits the choice of remote copy technology. For instance, synchronous replication is still very difficult to achieve past the 50 km mark (see the latency sketch after this list). Choose a location that is sufficiently distant but still delivers the required bandwidth for the chosen replication/remote copy technology.
  • Geophysical conditions - in order to avoid a natural disaster, it is not always sufficient to move your Disaster Recovery center a specific distance from the primary center. Most natural disasters deliver high impact in areas whose terrain configuration or other geophysical conditions support their spread. For instance, a safe hurricane distance was long considered to be 150 km, yet hurricane Katrina only lost strength after travelling more than 240 km inland, since there was no terrain feature to stop it. The best location is in a separate flood basin, off a seismic fault line (or at least on a different one), and with a large mountain between the primary and the DR site.
  • Means of transportation - increased distance between the primary and DR site may make it difficult for employees to travel to the recovery site. This is especially true in a crisis, when roads may be damaged or blocked, or public transport is halted by strikes. Choose a site with multiple travel options - railroad, motorway, even river boat.
  • Vicinity of strategic objects - it is never smart to place your Disaster Recovery center in the vicinity of objects of strategic importance to the country. Such locations are prone to terrorist attacks, and to attack by opposing forces in a military conflict. Even in natural disasters, strategic locations will have a strong military presence that may limit access to your Disaster Recovery center. Strategic objects include military bases, airports, refineries and oil depots. Choose a safe distance from such locations.
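
To make the replication-distance point from the list above concrete, here is a small back-of-the-envelope sketch in Python with illustrative figures only: light in optical fibre propagates at roughly 200,000 km/s, and a synchronous write has to wait for a full round trip, so latency grows linearly with distance - and real fibre routes are longer than the straight-line distance.

    # Back-of-the-envelope round-trip latency for synchronous replication.
    # Assumes ~200,000 km/s propagation in fibre and ignores equipment and
    # protocol delays, so real-world figures will be noticeably worse.
    FIBRE_KM_PER_MS = 200.0  # ~200,000 km/s = 200 km per millisecond

    for distance_km in (10, 50, 100, 300):
        round_trip_ms = 2 * distance_km / FIBRE_KM_PER_MS
        print(f"{distance_km:>4} km -> ~{round_trip_ms:.2f} ms added to every synchronous write")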

There is no such thing as an ideal Disaster Recovery location. The optimal location is the one that minimizes the risks at an acceptable cost and meets the required SLAs and regulatory requirements. And take the cloud into consideration!

Thursday, February 11, 2010

Telco SLA - parameters and penalties

5:13 PM Posted by Bozidar Spirovski
Communication links provided by Telco providers are critical to most businesses. And as any network admin will tell you, these links tend to have outages, ranging from small interruptions up to massive breakdowns that can last for days.

When such interruptions occur, businesses suffer, but unless the provider has serious contractual obligations, there is little effort on their side to improve service or correct issues.

That is why businesses need a good Service Level Agreement (SLA). Preparing an SLA is dreaded by most, since it is full of numbers and parameters for which the client must decide what is acceptable and whose values may be difficult to measure.

SLA Parameters
A good SLA is not necessarily loaded with a lot of numbers. You need to work with two or three parameters that are important to you. Here are the most frequent SLA parameters, with their acceptable values:
  • Availability - more than 99% for internet links, more than 99.5% for corporate data links (see the downtime calculation after this list)
  • Packet Loss - less than 0.4% for internet links, less than 0.2% for corporate data links
  • Jitter - less than 15ms for internet links, less than 5ms for corporate data links
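
To see what those percentages mean in practice, here is a quick sketch that converts an availability target into allowed downtime (plain arithmetic, assuming a 30-day month; the targets shown are the ones mentioned above plus 99.9% for comparison).

    # Convert an availability target into allowed downtime.
    MINUTES_PER_MONTH = 30 * 24 * 60   # assuming a 30-day month
    HOURS_PER_YEAR = 365 * 24

    for availability in (0.99, 0.995, 0.999):
        down_month_min = (1 - availability) * MINUTES_PER_MONTH
        down_year_h = (1 - availability) * HOURS_PER_YEAR
        print(f"{availability:.1%} availability -> {down_month_min:.0f} min/month, "
              f"{down_year_h:.1f} h/year of allowed downtime")
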
SLA Penalties
And you need penalties that will hurt the provider. Penalties are the big stick in the SLA. Here are the penalties that you want (a worked example follows the list):
  • small breach of SLA - 25% to 33% of monthly fee
  • large breach of SLA - 50% to 100% of monthly fee
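
As a simple illustration of how such a penalty clause plays out, here is a sketch applying hypothetical tiers to a measured monthly availability; the fee, the 99.5% target and the breach thresholds are invented for the example, and only the penalty percentages follow the ranges above.

    # Hypothetical penalty calculation against a 99.5% availability target.
    MONTHLY_FEE = 2000.0   # example fee
    TARGET = 0.995         # contracted availability

    def penalty(measured: float) -> float:
        shortfall = TARGET - measured
        if shortfall <= 0:
            return 0.0                    # SLA met, no penalty
        if shortfall <= 0.005:            # small breach, e.g. 99.0%-99.5% measured
            return 0.30 * MONTHLY_FEE     # 30% of the monthly fee
        return 0.75 * MONTHLY_FEE         # large breach: 75% of the monthly fee

    for measured in (0.998, 0.992, 0.97):
        print(f"measured {measured:.1%} -> penalty {penalty(measured):.0f}")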


Be aware that no provider will create an SLA that eats much of its profits. A committed provider can be identified by the type of Service Level Agreement (SLA) it is prepared to sign without special negotiations.


Related posts

9 Things to watch out for in an SLA
The SLA Lesson: software bug blues
5 SLA Nonsense Examples - Always Read the Fine Print