Many scientific applications now generate several terabytes (TB) of data in a single run, and data sizes are expected to reach petabytes (PB) in the near future. Enabling fast extraction of knowledge from these large datasets holds the key to faster scientific discoveries. However, reading data from a traditional storage subsystem is a slow process, as I/O performance lags far behind computational performance. Reducing data movement from the storage subsystem is widely considered a viable option for improving the performance of data analysis. In this paper, we propose Segmented Analysis, a strategy that reduces data movement by reusing results when multiple similar analysis tasks process the same segments of data. The basic idea is to partition the data accessed by an analysis task into segments, to process each segment with the given analysis task, and to store the per-segment results in a cache for future use. When a later analysis task needs to perform the same computation on segments whose results are already available in the cache, it can skip both the data movement and the computation for those segments. The Segmented Analysis framework contains modules for detecting overlap in computation and I/O accesses, for in situ segmentation, and for caching segment results. We evaluate the Segmented Analysis strategy by varying factors such as the overlap rate among analysis tasks, the request size, and the granularity of segmentation. We observed 2X to 13X I/O speedups and 2X to 8X computation speedups when the overlap is above 50%.
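The result-reuse idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; the names (`SegmentCache`, `segment_key`, `analyze`) and the in-memory dictionary cache are assumptions chosen for brevity, standing in for the framework's segmentation and caching modules.

```python
import hashlib

def segment_key(task_id, offset, length):
    """Identify a segment result by the analysis routine and the range it covers."""
    return hashlib.sha1(f"{task_id}:{offset}:{length}".encode()).hexdigest()

class SegmentCache:
    """Toy in-memory cache mapping (task, segment) keys to per-segment results."""
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def analyze(self, task_id, offset, length, read_fn, compute_fn):
        key = segment_key(task_id, offset, length)
        if key in self.store:            # result reuse: skip both I/O and computation
            self.hits += 1
            return self.store[key]
        self.misses += 1
        data = read_fn(offset, length)   # data movement happens only on a miss
        result = compute_fn(data)
        self.store[key] = result         # keep the segment result for future tasks
        return result

# Usage: two analysis tasks whose accesses overlap on 2 of 3 segments.
dataset = list(range(12))
read = lambda off, n: dataset[off:off + n]
mean = lambda xs: sum(xs) / len(xs)

cache = SegmentCache()
for off in (0, 4, 8):      # task A processes segments [0:4], [4:8], [8:12]
    cache.analyze("mean", off, 4, read, mean)
for off in (4, 8):         # task B re-analyzes the overlapping segments
    cache.analyze("mean", off, 4, read, mean)
print(cache.hits, cache.misses)   # → 2 3
```

The second task's two requests are served entirely from the cache, which is the source of the I/O and computation savings the evaluation quantifies.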