Multi-core technology is an emerging hardware trend that offers significant capabilities for computationally expensive applications. However, it also demands a paradigm shift in the software industry: developers must decide on the best distribution of software components across the available CPUs and trade off computational efficiency against the cost of restructuring the standard sequential execution of software. The relationship between measured performance and the corresponding parameters, such as the number of threads and CPUs, remains an interesting open problem, especially since controlled experiments are challenging to conduct. This paper reports a case study on the use of Solaris containers to control the assignment of threads to the available CPUs in a set of applications. We model performance as a function of the number of threads, the number of CPUs, and the type of program, using two modeling strategies, linear regression and neural networks, applied to the well-established Java Grande benchmark. We observe a nonlinear relationship between these parameters and the associated performance. In addition, neural network models consistently provide better estimates of performance over a range of parameter values. The results reported in this paper can therefore be used to restructure software programs so as to fully utilize the available resources.