High performance computing environment for multidimensional image analysis
© Rao et al; licensee BioMed Central Ltd. 2007
- Published: 10 July 2007
The processing of images acquired through microscopy is a challenging task due to the large size of the datasets (several gigabytes) and the fast turnaround time required. Significantly increasing the throughput of the image processing stage can therefore have a major impact on microscopy applications.
We present a high performance computing (HPC) solution to this problem. It involves decomposing the spatial 3D image into segments that are assigned to unique processors and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas on 1024 nodes of Blue Gene this task can be performed in 18.8 seconds, a 478× speedup.
Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct large-scale experiments on massive datasets that were previously impractical.
- Domain Decomposition
- Message Passing Interface
- High Performance Computing
- Cartesian Grid
- Image Processing Task
Progress in biology depends on the ability to observe, measure and model the behavior of organisms at multiple levels of abstraction, from the microscopic to the macroscopic. There has recently been tremendous growth in techniques to probe the structure and workings of cellular and organ-level mechanisms. Significant advances have been made in areas such as serial block-face microscopy and knife-edge microscopy, which allow microstructure information to be gathered at unprecedented levels of both detail and scope. At the same time, advances have been made in gathering temporal image data streams from microscopic samples with the use of fluorescent and multi-photon imaging techniques. The increasing spatial and temporal resolution available, combined with advanced sectioning techniques, provides extremely content-rich data and puts unprecedented power in the hands of biologists.
This is a game-changing development, since biologists are no longer limited to carrying out experiments to test a single hypothesis at a time. They are now able to vary multiple parameters simultaneously, and observe several phenomena of relevance using multi-spectral techniques. This can be combined with recent advances in data mining techniques to determine relationships and correlations amongst the many variables of interest. This allows a significantly larger parameter space to be explored.
However, practitioners and researchers today face a mounting computational bottleneck: data storage and processing needs are growing exponentially. It may take several hours or even days to process the collected data, and the resulting turnaround time may be unacceptable for the workflows desired in laboratories. Unless the computational issues are addressed immediately, biologists will be overwhelmed by the data collected and will lack adequate tools to process and extract meaning from it. Though computer vision techniques have been applied in the past to partially automate some of the analysis, the current challenge is to process much larger quantities of data (typically several gigabytes) with sufficiently high throughput. This would allow biologists to interpret experimental results rapidly, and ideally interactively.
A related problem is that of building models from the collected data, which is a useful way to test our understanding of the phenomena of interest. As the data expose interactions at finer spatial and temporal scales, the variables being modeled also grow in number and complexity, which increases the computational burden of the modeling effort as well.
We present a solution to this challenge based on a high-performance computing (HPC) architecture, an example of which is IBM's Blue Gene supercomputer. There has been a long history of using HPC to model problems in physics, but its use in biology is recent and rather limited. In general, HPC has not been used much in bio-imaging applications due to the difficulty of porting code to parallel machines. Algorithms for image processing, such as segmentation and feature extraction, have not been sufficiently developed and investigated in an HPC context. Though there was interest in this area in the mid-1990s, it appears to have waned, and the use of HPC for imaging applications is currently quite limited.
However, the landscape is changing rapidly due to the increased availability of HPC platforms, improvements in parallel programming environments (such as the emergence of the Message Passing Interface as a standard), and the availability of toolkits for parallel data mining. HPC has significant potential to be applied to problems in biology, and to microscopy imaging applications in particular. The high computational demands of simulating and modeling complex systems can also be addressed through HPC. A single HPC architecture can thus support multiple computational requirements, ranging from analyzing data to building and simulating models.
We now examine the computational requirements for a total system dedicated to handling biological imaging tasks. The following tasks would need to be performed:
(1) Data collection: the system needs to gather and store the acquired images.
(2) Deconvolution: the acquired images may contain distortions, such as blur introduced by the front-end optics, which need to be corrected.
(3) Segmentation and feature extraction: the raw images need to be segmented into regions of interest and processed to extract domain-specific features that aid in classification.
(4) Analysis and interpretation: the segmented regions are interpreted as parts of a larger biological structure such as a neuron or organ.
(5) Modeling and prediction: models of function (e.g. neural connectivity models) can be built to predict the behavior of entities of interest. This phase may require the use of simulation and optimization techniques.
From the list of tasks above, the following core computational requirements can be identified: (1) the ability to handle large data sets, including their storage and retrieval; (2) high-throughput processing; and (3) the visualization of results.
As a case study, we consider the system developed by Denk and Horstmann, which consists of a serial block-face scanning electron microscope (SEM) used to explore 3D connectivity in neural tissue. Light microscopy is incapable of resolving fine structures such as dendrites, which necessitates the use of an SEM. The system obtains slices that are 50 nm thick, with a 50 × 40 μm area and a 27 nm pixel size, resulting in single images of size 4 megabytes. Typically, 2000 slices are obtained, giving rise to a stack of 8 GB of data. Several such stacks need to be collected to gain information about neural connectivity in a functional area of the brain. Connectivity within the neural tissue is inferred by identifying structures within the 2D slices, such as neurites, and tracking them across successive slices. This permits the reconstruction of a 3D model of structures of interest, such as a neuron with its soma, dendrites and axon. The ultimate use of such reconstructed structures would be in developing accurate computational models of cortical function.
The system developed by Denk and Horstmann operates at the nanometer scale. A similar system developed by McCormick et al. operates at the micrometer scale and can section an entire mouse brain. The computational requirements arising from both systems are very similar. The need for time-varying image analysis arises from biological experiments such as the analysis of macrophage images from the BioImaging Institute at MIT. The goal there is to observe the cell motility of macrophages under different ambient conditions, which requires 3D deconvolution to be performed on a sequence of images. Again, the computational needs of such experiments are enormous.
In this paper, we restrict our scope to solving the problem of high-throughput processing. Since the performance of single-CPU machines is not sufficient to handle the size of the case-study data set described above, the use of parallel processing is inevitable. Processing our case-study data set has the following characteristics:
1. Significant communication is required between processors. This is especially true of image processing tasks.
2. The bulk of the communication is between nearest neighbors.
3. Computation and communication needs are well balanced. For instance, consider recursive 3D median filtering, which is useful for combating noise. Every iteration of the filtering operation may require a significant amount of data communication. Suppose we use 1024 processors for the 8 GB stack, with each processor storing 8 MB of data. Assuming up to half of this data needs to be communicated, each process must send and receive on the order of 4 MB to and from its 26 neighbors. This is a large amount of data, especially since it may need to be communicated at every iteration of the computation; a rough sizing sketch is given after this list.
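To make these numbers concrete, the following minimal sketch (not taken from our implementation; the subdomain side and window radius are assumed values) estimates the ghost-layer volume a cubic subdomain must receive per iteration. The half-of-local-data figure used above is a conservative upper bound that also covers larger windows and repeated exchanges.

```c
#include <stdio.h>

/* Rough estimate of the ghost-layer (halo) voxels a cubic subdomain of side
 * n must receive from its 26 neighbors per iteration, when a moving-window
 * operator of radius r needs r extra voxels beyond every face, edge and
 * corner.  Illustrative only; n and r are assumed values.                  */
static long long halo_voxels(long long n, long long r)
{
    long long outer = n + 2 * r;
    return outer * outer * outer - n * n * n;
}

int main(void)
{
    long long n = 203;  /* ~8 MB of 1-byte voxels per process (203^3 ~ 8.4M) */
    long long r = 1;    /* radius of a small (3x3x3) filter window           */
    printf("halo voxels per iteration: %lld (~%.2f MB at 1 byte/voxel)\n",
           halo_voxels(n, r), halo_voxels(n, r) / (1024.0 * 1024.0));
    return 0;
}
```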
There are many possible ways of using parallel processing systems to meet these requirements, such as clusters of workstations, shared memory systems and the use of game processors. We chose to implement our system on the IBM Blue Gene supercomputer due to its scalability (to upwards of 100,000 processors) and its specialized interconnect technology, offering superior communication speeds.
In order to make advances in microscopy, progress needs to be made on several fronts simultaneously, including new methods for image acquisition, processing algorithms for feature extraction and analysis, and the computational architecture and methodology for fast processing. The focus of this paper is on the latter issue: the use of parallel computation to address the throughput requirements. There are other research efforts investigating algorithms for feature extraction and reconstruction. Our goal in this paper has been to show how such algorithms can be implemented on a parallel architecture.
We expect that other image processing tasks, such as iterative morphological operations (e.g. dilations followed by erosions), or recursive filtering (e.g. recursive 3D median filtering) will also benefit from implementation on an HPC platform if they operate on large datasets. In general, any local neighborhood operation can be computed advantageously using our method as shown in Figure 2.
The procedure described to produce the result in Figure 4 is semi-automated. There are more sophisticated approaches in the literature, such as the technique developed by Busse et al., which is also semi-automatic. We have chosen to explore a parallel processing solution to the reconstruction problem and are initially implementing simpler approaches.
Figure 5 shows that the Blue Gene/L communication network is superior to that of the Linux cluster used, and that its communication overhead scales appropriately with the number of processors. The Linux cluster did not exhibit this scaling behavior due to the configuration of its switches, which were likely deployed in 32-node banks.
Other studies by Almasi et al. have demonstrated that a wide variety of scientific applications can scale to tens of thousands of processors on Blue Gene/L. We view this as a significant advantage of the Blue Gene/L platform: the image processing application can be written once using MPI, and the hardware platform provides the desired scaling.
The significance of the result in Figure 6 depends on the precise task being performed. If the task is computation bound, then improvements in communication may not have a significant effect on throughput. However, for communication-bound tasks, the demonstrated improvement of the Cartesian mapping may be significant.
Other studies, such as that of Agarwal et al., have also demonstrated an increase in communication times when a task is not optimally mapped to the processors of Blue Gene/L. This shows that mapping communicating objects to nearby processors is desirable.
To summarize this discussion, it is advantageous to use an HPC architecture that optimizes nearest-neighbor communication on a 3D grid. Depending on the task to be performed, this communication efficiency may result in significant throughput increases as compared with other network topologies. The comparisons shown in this paper are not meant to be comprehensive, and the benchmarking of image processing applications on HPC platforms needs further investigation. This may require a community-wide effort to create and measure appropriate benchmarks on multiple HPC platforms. As an example, NIST has been conducting a series of TRECVID workshops on measuring video image database retrieval performance for several years.
Our system has been designed with MPI, the de facto standard environment for parallel processing, which enables it to be used widely across different platforms. Given the increasing popularity of grid computing and the increasing availability of supercomputers, we expect to see wider use of parallel processing techniques in areas such as microscopy. In order to fully exploit the available parallel platforms, we recommend that university students be trained to use such technology, and that curricula in courses such as image processing and computer vision cover parallel processing techniques.
Our results demonstrate that the use of HPC can have a dramatic improvement in the throughput of image processing tasks, and that HPC has tremendous potential to influence the fields of bio-imaging and microscopy.
The main contribution of this paper is the development of a parallel image processing system for handling multidimensional image data that optimizes computation and communication in a multiprocessor system. This is achieved through an appropriate domain decomposition that exploits MPI's support for computation on Cartesian grids.
Our results show that by using the Blue Gene/L machine, significant throughput increases can be achieved compared to conventional clusters. Furthermore, the domain decomposition and algorithms presented in this paper show favorable scaling behavior as the number of processors is increased.
This paper presents early implementation results, and further work needs to be done to incorporate more sophisticated image processing algorithms in this environment. Additionally, time-domain processing capability needs to be added.
Based on the requirements in the background section, we propose the following solution. We use MPI (Message Passing Interface), a widely used message passing library standard. There are several open source implementations available for different computing platforms.
The compute nodes in Blue Gene/L are interconnected through multiple complementary high-speed low-latency networks, including a 3D torus network. We directly map the total 3D image volume that needs to be processed onto a 3D torus. This is done by partitioning the 3D image volume into the number of processors available, and ensuring that neighboring processors on the Blue Gene/L machine are dedicated to handling neighboring 3D image partitions. Each processor stores up to half the image data from each of its nearest neighbors in order to minimize communication overhead.
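The following minimal sketch illustrates one way such a partition can be set up with MPI; the volume dimensions are assumed for the example, and the handling of remainder voxels when the volume does not divide evenly is omitted.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Assumed stack dimensions for illustration (voxels). */
    const int NX = 2048, NY = 2048, NZ = 2000;

    MPI_Init(&argc, &argv);

    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Let MPI choose a balanced 3D factorization of the process count. */
    int dims[3] = {0, 0, 0};
    MPI_Dims_create(nprocs, 3, dims);

    /* Periodic in all three dimensions, mirroring the torus links;
     * reorder = 1 lets the runtime place neighboring ranks on
     * neighboring physical nodes.                                    */
    int periods[3] = {1, 1, 1};
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &cart);

    int rank, coords[3];
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_coords(cart, rank, 3, coords);

    /* Each rank owns one contiguous block of the volume
     * (remainder voxels are ignored in this sketch).      */
    int lx = NX / dims[0], ly = NY / dims[1], lz = NZ / dims[2];
    printf("rank %d: block origin (%d,%d,%d), size %d x %d x %d\n",
           rank, coords[0] * lx, coords[1] * ly, coords[2] * lz, lx, ly, lz);

    MPI_Finalize();
    return 0;
}
```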
This domain decomposition allows efficient implementation of moving window operators such as median filtering. Furthermore, by performing such operations on the 3D image data, as opposed to multiple 2D image slices, we are able to use the full available 3D information to combat noise and ambiguity.
Operations such as cell boundary extraction can also be carried out efficiently. This is because the bulk of the inter-processor communications is between nearest-neighbor nodes on the 3D torus, for which dedicated hardware connectivity exists.
We assume that the image sequences are synchronously captured at a constant number of frames per second. Each processor can update its 3D volume data with the specific 3D image segment it is responsible for processing over the next time slice. The proposed domain decomposition will allow the imaging operations to be carried out effectively over large time-series data sets.
The domain decomposition is illustrated for the 2D case in Figure 7. Here, a 2D image is decomposed into 8 × 8 tiles. Consider the tile indicated by R. Suppose we are performing an operation such as recursive median filtering. At every iteration, the tile R may need to exchange data with its 8 nearest neighbors as indicated. This is because the computation of values within R may depend on the values of pixels within neighboring tiles. Similarly, Figure 8 illustrates the 26 nearest neighbors in three dimensions.
Every step of an iterative algorithm involves two phases: communication followed by computation.
To facilitate communication, we set up a 3D Cartesian communicator grid based on the number of processors available, assuming that each processor executes a single process. Data partitioning is then based on the process rank assigned by MPI. For communication, each node sends information to its nearest neighbors. This is implemented using the MPI_Sendrecv call between appropriate pairs of processes: for instance, a process can send information to its neighbor to the north while receiving information from its neighbor to the south. Pairing the sends and receives in this way avoids deadlock during the computation. The data resident at each process is initialized by reading the appropriate partition from disk.
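A minimal sketch of this exchange step is given below; the buffer names, element type and counts are assumptions for illustration, only one face layer per dimension is exchanged, and the Cartesian communicator is assumed to have been created as in the earlier sketch.

```c
#include <mpi.h>

/* Exchange one ghost layer along each dimension of the Cartesian
 * communicator `cart`.  send_lo/send_hi hold the low/high boundary faces
 * of the local block; recv_lo/recv_hi receive the matching ghost layers
 * from the neighbors.  Names and counts are illustrative.                */
void exchange_ghost_layers(MPI_Comm cart,
                           float *send_lo, float *send_hi,
                           float *recv_lo, float *recv_hi, int count)
{
    for (int dim = 0; dim < 3; ++dim) {
        int below, above;

        /* Ranks of the neighbors one step down/up along this dimension. */
        MPI_Cart_shift(cart, dim, 1, &below, &above);

        /* Send our high face up while receiving the low ghost layer from
         * below; MPI_Sendrecv pairs the transfers and avoids deadlock.   */
        MPI_Sendrecv(send_hi, count, MPI_FLOAT, above, 0,
                     recv_lo, count, MPI_FLOAT, below, 0,
                     cart, MPI_STATUS_IGNORE);

        /* Reverse direction: low face down, high ghost layer from above. */
        MPI_Sendrecv(send_lo, count, MPI_FLOAT, below, 1,
                     recv_hi, count, MPI_FLOAT, above, 1,
                     cart, MPI_STATUS_IGNORE);
    }
}
```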
Once each process has the required data, computation can proceed. Appropriate algorithms for image feature extraction and processing are implemented. This procedure of communication and computation is performed until all the data have been processed.
Image processing algorithms
In order to process Denk and Horstmann's dataset, we used techniques published in the literature. The first processing step is to apply 3D median filtering, which removes noise in the images while preserving edges. We used a simple implementation of the 3D median filter rather than the more complex implementations described in the literature.
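A minimal serial kernel for such a filter is sketched below; the 3 × 3 × 3 window, the 8-bit voxel type, the array layout and the one-voxel ghost rim are assumptions for illustration, not details taken from our implementation. Each process would run such a kernel over its local subvolume once the ghost layers have been exchanged.

```c
/* 3x3x3 median filter over the interior of a local subvolume whose ghost
 * layer (one voxel on every side) has already been filled by the exchange
 * step.  nx, ny, nz include the ghost rim; voxels are assumed 8-bit.      */
static unsigned char median27(unsigned char w[27])
{
    /* Insertion sort of the 27 window values; the median is the middle one. */
    for (int i = 1; i < 27; ++i) {
        unsigned char v = w[i];
        int j = i - 1;
        while (j >= 0 && w[j] > v) { w[j + 1] = w[j]; --j; }
        w[j + 1] = v;
    }
    return w[13];
}

void median_filter_3d(const unsigned char *in, unsigned char *out,
                      int nx, int ny, int nz)
{
    unsigned char win[27];
    for (int z = 1; z < nz - 1; ++z)
        for (int y = 1; y < ny - 1; ++y)
            for (int x = 1; x < nx - 1; ++x) {
                int k = 0;
                for (int dz = -1; dz <= 1; ++dz)
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            win[k++] = in[((z + dz) * ny + (y + dy)) * nx + (x + dx)];
                out[(z * ny + y) * nx + x] = median27(win);
            }
}
```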
The second processing step is to extract contours corresponding to neural structures such as dendrites. There are many possible solutions to this problem; the one we chose to implement is based on the concept of deformable templates, or snakes. We implemented the technique described by Xu and Prince. Briefly, the snake is an energy-minimizing deformable shape, where the energy is a function of its internal elastic energy and an external force field. The force field is computed from the image data, e.g. from the gradient of the image, as shown in Figure 3.
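For reference, a standard statement of this formulation is given below; here α and β weight the contour's elasticity and rigidity, f denotes an edge map computed from the image, and μ controls the smoothness of the gradient vector flow field.

```latex
% Contour (snake) energy, minimized over the curve x(s):
E \;=\; \int_0^1 \tfrac{1}{2}\Bigl(\alpha\,\lvert \mathbf{x}'(s)\rvert^2
        \;+\; \beta\,\lvert \mathbf{x}''(s)\rvert^2\Bigr)
        \;+\; E_{\mathrm{ext}}\bigl(\mathbf{x}(s)\bigr)\,ds

% Gradient vector flow field v(x,y) = (u(x,y), v(x,y)), which replaces the
% usual external force; f is an edge map derived from the image:
\mathcal{E} \;=\; \iint \mu\bigl(u_x^2 + u_y^2 + v_x^2 + v_y^2\bigr)
        \;+\; \lvert\nabla f\rvert^2\,\lvert \mathbf{v} - \nabla f\rvert^2 \; dx\,dy
```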
The authors are grateful to the anonymous reviewers for their helpful comments which improved this paper. We would also like to thank Rahul Garg, IBM India Research Lab, for several useful discussions and suggestions for performance analysis.
This article has been published as part of BMC Cell Biology Volume 8 Supplement 1, 2007: 2006 International Workshop on Multiscale Biological Imaging, Data Mining and Informatics. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2121/8?issue=S1
- Denk W, Horstmann H: Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol. 2004, 2 (11): e329. 10.1371/journal.pbio.0020329.
- McCormick BH: Construction of anatomically correct models of mouse brain networks. Neurocomputing. 2004, 58–60: 379-386. 10.1016/j.neucom.2004.01.070.
- Evans JG, Correia I, Krasavina O, Watson N, Matsudaira P: Macrophage podosomes assemble at the leading lamella by growth and fragmentation. J Cell Biol. 2003, 161 (4): 697-705. 10.1083/jcb.200212037.
- Webb J: High performance computing in image processing and computer vision. Proceedings of the 12th IAPR International Conference. 1994, 3: 218-222.
- Busse B: Development of a semi-automated method for three-dimensional neural structure segmentation. Society for Neuroscience Abstracts. 2006, 834.13 II28.
- Xu C, Prince JL: Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing. 1998, 7: 359-369. 10.1109/83.661186.
- Almasi G: Early experience with scientific applications on the Blue Gene/L supercomputer. Euro-Par 2005 Parallel Processing: 11th International Euro-Par Conference, Lecture Notes in Computer Science. 2005, 3648: 560-570.
- Agarwal T, Sharma A, Kale LV: Topology-aware task mapping for reducing communication contention on large parallel machines. Parallel and Distributed Processing Symposium (IPDPS). 2006. 10.1109/IPDPS.2006.1639379.
- Adiga RN: Blue Gene/L torus interconnection network. IBM J Research and Development. 2005, 49 (2).
- Hurt SL, Rosenfeld A: Noise reduction in three-dimensional digital images. Pattern Recognition. 1984, 17 (4): 407-422. 10.1016/0031-3203(84)90069-4.
- Senel HG, Peters RA, Dawant B: Topological median filters. IEEE Transactions on Image Processing. 2002, 11 (2): 89-104. 10.1109/83.982817.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.