Improvements in medicine and healthcare are accelerating. Information generation, sharing, and expert analysis play a major role in advancing the medical sciences. The big data produced by medical procedures in hospitals, laboratories, and research centers must be stored and transmitted, and data compression is a critical tool for reducing the burden of both. Medical images in particular require special consideration: unlike many other types of big data, they demand lossless storage. Special-purpose compression algorithms and codecs can compress a variety of such images with superior performance compared to general-purpose lossless algorithms, and many lossless algorithms have been proposed for medical images to date. A compression algorithm comprises several stages. Before designing a special-purpose compression method, we need to know how much each stage contributes to the overall compression performance, so that time and design effort can be invested accordingly. In order to compare and evaluate these multi-stage compression techniques and to design more efficient compression methods for big data applications, this paper analyzes the effectiveness of each compression stage on the total performance of the algorithm.
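As a minimal sketch of the kind of stage-wise analysis described above (an illustrative assumption, not the specific pipeline evaluated in this paper), the following Python snippet builds a two-stage lossless pipeline, delta prediction followed by entropy coding via `zlib`, and compares the compressed size with and without the prediction stage to quantify that stage's contribution:

```python
# Illustrative sketch (not the paper's method): measure how much a
# prediction stage contributes on top of entropy coding alone.
import zlib

def delta_encode(data: bytes) -> bytes:
    """Stage 1: replace each byte with its difference from the previous byte."""
    prev = 0
    out = bytearray()
    for b in data:
        out.append((b - prev) & 0xFF)
        prev = b
    return bytes(out)

def delta_decode(data: bytes) -> bytes:
    """Inverse of delta_encode, to verify the pipeline is lossless."""
    prev = 0
    out = bytearray()
    for d in data:
        prev = (prev + d) & 0xFF
        out.append(prev)
    return bytes(out)

# Synthetic smooth gradient, a stand-in for a homogeneous medical-image region.
row = bytes(min(255, i // 4) for i in range(4096))

baseline = len(zlib.compress(row))                 # Stage 2 only
pipeline = len(zlib.compress(delta_encode(row)))   # Stage 1 + Stage 2

assert delta_decode(delta_encode(row)) == row      # losslessness check
print(f"original: {len(row)} B, zlib only: {baseline} B, delta+zlib: {pipeline} B")
```

On smooth data the prediction stage shrinks the residuals toward zero, so the entropy coder does markedly better with it than without; swapping stages in and out this way is one simple way to attribute overall performance to individual stages.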