Performance evaluation of container-based virtualization for high performance computing environments
Published 2019-07-16
Keywords
- Container-based virtualization
- Linux containers
- Singularity
- Docker
- High performance computing
Abstract
Virtualization technologies have evolved alongside computational environments, offering features such as isolation, accountability, resource allocation, and fair resource sharing. Modern processor technologies let commodity computers emulate diverse environments in which a wide range of computational scenarios can run. Along with this processor evolution, developers have implemented virtualization mechanisms with progressively better performance than earlier virtualized environments. Recently, operating system-level virtualization technologies have attracted attention worldwide because of their significant performance improvements. This paper presents the features of three container-based operating system virtualization tools: LXC, Docker, and Singularity. LXC, Docker, Singularity, and bare metal are put under test through a customized single-node HPL benchmark and an MPI-based application on a multi-node testbed. Disk I/O performance, memory (RAM) performance, network bandwidth, and GPU performance are also measured for the container technologies against bare metal. Preliminary results and the conclusions drawn from them are presented and discussed.
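As a minimal sketch of how such a head-to-head comparison can be orchestrated (not taken from the paper; the image names, binary paths, and process counts below are illustrative assumptions), the same HPL binary can be launched on bare metal, under Docker, and under Singularity, timing each run:

```python
import subprocess
import time

# Hypothetical commands for launching the same HPL binary in each
# environment. Image names, paths, and process counts are illustrative
# assumptions, not the paper's actual configuration.
ENVIRONMENTS = {
    "bare-metal": ["mpirun", "-np", "4", "./xhpl"],
    "docker": ["docker", "run", "--rm", "hpl-image",
               "mpirun", "-np", "4", "/hpl/xhpl"],
    "singularity": ["singularity", "exec", "hpl.sif",
                    "mpirun", "-np", "4", "./xhpl"],
}

def run_benchmark(name, cmd):
    """Run one HPL invocation and report status and wall-clock time."""
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.perf_counter() - start
    status = "ok" if result.returncode == 0 else f"exit {result.returncode}"
    print(f"{name}: {status}, {elapsed:.1f} s")

if __name__ == "__main__":
    for name, cmd in ENVIRONMENTS.items():
        run_benchmark(name, cmd)
```

Running the identical binary under each runtime is what isolates container overhead from application variance in this kind of comparison.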
References
[2] R. Uhlig et al., “Intel virtualization technology,” Computer, vol. 38, no. 5, pp. 48–56, 2005, doi: 10.1109/MC.2005.163.
[3] M. G. Xavier, M. V. Neves, F. D. Rossi, T. C. Ferreto, T. Lange, and C. A. F. De Rose, “Performance Evaluation of Container-Based Virtualization for High Performance Computing Environments,” in 2013 21st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, 2013, pp. 233–240, doi: 10.1109/PDP.2013.41.
[4] R. Buyya, C. S. Yeo, and S. Venugopal, “Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services As Computing Utilities,” in Proceedings of the 2008 10th IEEE International Conference on High Performance Computing and Communications, 2008, pp. 5–13, doi: 10.1109/HPCC.2008.17.
[5] P. Mell and T. Grance, “The NIST Definition of Cloud Computing,” NIST Special Publication 800-145, Gaithersburg, MD, 2011.
[6] I. Foster, Y. Zhao, I. Raicu, and S. Lu, “Cloud Computing and Grid Computing 360-Degree Compared,” in 2008 Grid Computing Environments Workshop, 2008, pp. 1–10, doi: 10.1109/GCE.2008.4738445.
[7] K. R. Jackson et al., “Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud,” in 2010 IEEE Second International Conference on Cloud Computing Technology and Science, 2010, pp. 159–168, doi: 10.1109/CloudCom.2010.69.
[8] Google, “Google Trends,” https://trends.google.com/trends/, accessed: 2017-03-15.
[9] “LXC: Linux Containers,” https://linuxcontainers.org/, accessed: 2017-03-15.
[10] D. Merkel, “Docker: lightweight Linux containers for consistent development and deployment,” Linux J., vol. 2014, Mar. 2014.
[11] M. Helsley, “LXC: Linux container tools. Tour and set up the new container tools called Linux Containers,” IBM developerWorks, pp. 1–10, 2009.
[12] G. M. Kurtzer, V. Sochat, and M. W. Bauer, “Singularity: Scientific containers for mobility of compute,” PLoS One, vol. 12, no. 5, pp. 1–20, May 2017, doi: 10.1371/journal.pone.0177459.
[13] C. Ruiz, E. Jeanvoine, and L. Nussbaum, “Performance evaluation of containers for HPC,” in European Conference on Parallel Processing, 2015, pp. 813–824.
[14] D. Bernstein, “Containers and Cloud: From LXC to Docker to Kubernetes,” IEEE Cloud Comput., vol. 1, no. 3, pp. 81–84, 2014, doi: 10.1109/MCC.2014.51.
[15] Wikipedia, “Docker (software),” 2017, accessed: 2017-03-18. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Docker_(software)&oldid=770287241
[16] M. Fowler and J. Lewis, “Microservices,” ThoughtWorks, 2014. [Online]. Available: https://martinfowler.com/articles/microservices.html, accessed: 2015-02-17.
[17] Y. Gil et al., “Examining the Challenges of Scientific Workflows,” Computer, vol. 40, no. 12, pp. 24–32, 2007, doi: 10.1109/MC.2007.421.
[18] “Singularity: A Container for HPC,” Admin Magazine. [Online]. Available: http://www.admin-magazine.com/HPC/Articles/Singularity-A-Container-for-HPC, accessed: 2017-03-18.
[19] A. Petitet, “HPL: A portable implementation of the High-Performance Linpack benchmark for distributed-memory computers,” 2004. [Online]. Available: http://www.netlib.org/benchmark/hpl/
[20] D. Eastlake 3rd and P. Jones, “US Secure Hash Algorithm 1 (SHA1),” RFC 3174, Sep. 2001.
[21] J. Dongarra, “Preface: Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard,” Int. J. High Perform. Comput. Appl., vol. 16, no. 2, p. 115, May 2002, doi: 10.1177/10943420020160020101.
[22] W. D. Norcott, “IOzone Filesystem Benchmark,” 2012. [Online]. Available: http://www.iozone.org/
[23] Docker, “Use the AUFS storage driver,” 2017. [Online]. Available: https://docs.docker.com/storage/storagedriver/aufs-driver/
[24] J. D. McCalpin, “Memory bandwidth and machine balance in current high performance computers,” IEEE Comput. Soc. Tech. Comm. Comput. Archit. Newsl., pp. 19–25, Dec. 1995.
[25] J. D. McCalpin, “STREAM: Sustainable Memory Bandwidth in High Performance Computers.” [Online]. Available: https://www.cs.virginia.edu/stream/
[26] Network-Based Computing Laboratory, “OSU Micro-Benchmarks.” [Online]. Available: http://mvapich.cse.ohio-state.edu/benchmarks/
[27] E. Lindholm, J. Nickolls, S. Oberman, and J. Montrym, “NVIDIA Tesla: A Unified Graphics and Computing Architecture,” IEEE Micro, vol. 28, no. 2, pp. 39–55, 2008, doi: 10.1109/MM.2008.31.
[28] D. Kirk, “NVIDIA CUDA software and GPU parallel computing architecture,” in ISMM, 2007, vol. 7, pp. 103–104.
[29] L. V. Kalé, A. Bhatele, E. J. Bohm, and J. C. Phillips, “NAMD (NAnoscale Molecular Dynamics),” in Encyclopedia of Parallel Computing, D. Padua, Ed. Boston, MA: Springer US, 2011, pp. 1249–1254.
[30] “NVIDIA Docker: GPU server application deployment made easy,” Feb. 2017. [Online]. Available: https://devblogs.nvidia.com/parallelforall/nvidia-docker-gpu-server-application-deployment-made-easy/
[31] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, “An updated performance comparison of virtual machines and Linux containers,” in 2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2015, pp. 171–172, doi: 10.1109/ISPASS.2015.7095802.
[32] Z. Kozhirbayev and R. O. Sinnott, “A performance comparison of container-based technologies for the Cloud,” Future Gener. Comput. Syst., vol. 68, pp. 175–182, 2017, doi: 10.1016/j.future.2016.08.025.
[33] F. Moreews et al., “BioShaDock: a community driven bioinformatics shared Docker-based tools registry,” F1000Research, vol. 4, p. 1443, Dec. 2015, doi: 10.12688/f1000research.7536.1.
[34] P. Belmann, J. Dröge, A. Bremges, A. C. McHardy, A. Sczyrba, and M. D. Barton, “Bioboxes: standardised containers for interchangeable bioinformatics software,” GigaScience, vol. 4, no. 1, Oct. 2015, doi: 10.1186/s13742-015-0087-0.
[35] B. D. O’Connor et al., “The Dockstore: enabling modular, community-focused sharing of Docker-based genomics tools and workflows [version 1; peer review: 2 approved],” F1000Research, vol. 6, no. 52, 2017, doi:10.12688/f1000research.10137.1.
[36] P. Di Tommaso, M. Chatzou, E. W. Floden, P. P. Barja, E. Palumbo, and C. Notredame, “Nextflow enables reproducible computational workflows,” Nat. Biotechnol., vol. 35, no. 4, pp. 316–319, 2017, doi: 10.1038/nbt.3820.
[37] D. Jacobsen and S. Canon, “Contain This, Unleashing Docker for HPC,” in Cray User Group Conference (CUG), 2015.
[38] D. Bahls, “Evaluating Shifter for HPC Applications,” St. Paul.
[39] R. Priedhorsky and T. Randles, “Charliecloud: Unprivileged containers for user-defined software stacks in HPC,” in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2017, p. 36.
[40] O. Weidner, M. Atkinson, A. Barker, and R. Filgueira, “Rethinking High Performance Computing Platforms: Challenges, Opportunities and Recommendations,” 2017.
[41] S. T. Lee, C. Y. Lin, and C. L. Hung, “GPU-based cloud service for Smith-Waterman algorithm using frequency distance filtration scheme,” Biomed Res. Int., vol. 2013, pp. 1–9, 2013, doi: 10.1155/2013/721738.
[42] W. Sun, R. Ricci, and M. L. Curry, “GPUstore: harnessing GPU computing for storage systems in the OS kernel,” in Proceedings of the 5th Annual International Systems and Storage Conference, 2012, p. 9.
[43] W. Zhu, C. Luo, J. Wang, and S. Li, “Multimedia Cloud Computing,” IEEE Signal Process. Mag., vol. 28, no. 3, pp. 59–69, 2011, doi: 10.1109/MSP.2011.940269.
[44] J. P. Walters et al., “GPU passthrough performance: A comparison of KVM, Xen, VMWare ESXi, and LXC for CUDA and OpenCL applications,” in IEEE International Conference on Cloud Computing (CLOUD), 2014, pp. 636–643, doi: 10.1109/CLOUD.2014.90.