{"id":103856,"date":"2016-11-16T17:13:32","date_gmt":"2016-11-16T22:13:32","guid":{"rendered":"https:\/\/www.bates.edu\/news\/?p=103856"},"modified":"2024-07-08T14:41:13","modified_gmt":"2024-07-08T18:41:13","slug":"at-bates-supercomputing-is-for-everyone-not-just-superheroes","status":"publish","type":"post","link":"https:\/\/www.bates.edu\/news\/2016\/11\/16\/at-bates-supercomputing-is-for-everyone-not-just-superheroes\/","title":{"rendered":"At Bates, high-performance computing is for everyone \u2014 not just superheroes"},"content":{"rendered":"<p>The college&#8217;s new high-performance computing setup is not visually dazzling, at least compared with the huge computer we remember from Batman\u2019s Batcave, circa 1968. It occupies barely half of a 6-foot-tall computer rack.<\/p>\n<p>But then again, the computing setup, known as a &#8220;cluster&#8221; because it links 12 separate computers, doesn&#8217;t need to fill a whole cave. And perhaps most important, it&#8217;s a shared resource \u2014 not reserved for one or two superheroes.<\/p>\n<blockquote><p>\u201cOurs is for anyone who has the need to examine data in a deep way.&#8221;<\/p><\/blockquote>\n<p>While many colleges have high-performance computing clusters, it\u2019s not uncommon for them to be appropriated by a few faculty members (superheroes, as it were).<\/p>\n<p>Bates&#8217; HPCC, however, is designed as a &#8220;community-based resource as opposed to one that would just benefit specific faculty members,\u201d explains Andrew White, director of academic and client services for Information and Library Services. 
\u201cOurs is for anyone who has the need to examine data in a deep way.&#8221;<\/p>\n<div id=\"attachment_103858\" style=\"width: 910px\" class=\"wp-caption alignright\"><a href=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9271.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-103858\" class=\"size-large wp-image-103858\" src=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9271-900x600.jpg\" alt=\"Jeffrey Oishi, the college's new computational astrophysicist, visits the HPCC where it lives: in a ground-floor hub room in the new residence hall at 65 Campus Ave. (Jay Burns\/Bates College)\" width=\"900\" height=\"600\" srcset=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9271-900x600.jpg 900w, https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9271-400x267.jpg 400w, https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9271-200x133.jpg 200w, https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9271.jpg 1620w\" sizes=\"(max-width: 900px) 100vw, 900px\" \/><\/a><p id=\"caption-attachment-103858\" class=\"wp-caption-text\">Jeffrey Oishi, the college&#8217;s new computational astrophysicist, visits the HPCC where it lives: in a ground-floor hub room in Kalperis Hall at 65 Campus Ave. (Jay Burns\/Bates College)<\/p><\/div>\n<p>One of those people is Jeff Oishi, the college&#8217;s new computational astrophysicist. He&#8217;ll be a power user, and some of his research funds helped to purchase the HPCC, which sits in a 15-by-17-foot hub room in the basement of Kalperis Hall, one of the college&#8217;s two new residences on Campus Avenue.<\/p>\n<p>Oishi calls the HPCC &#8220;the once and future&#8221; of computing. 
&#8220;Once&#8221; because sharing was once the way to access powerful computing, and &#8220;future&#8221; because it anticipates how Bates will use and expand the HPCC.<\/p>\n<p>When it comes to an HPCC, &#8220;high performance&#8221; has the same meaning as it does for a 1971 Plymouth Hemi Cuda muscle car. It means extraordinary power.<\/p>\n<p>For Oishi and his student researchers, this computing power will run models that explain how gases flow inside the atmospheres of giant planets like Jupiter. In one project, they will use an approximation known as linearization to model how gases go from stable to unstable. In a second project, Oishi\u2019s team will do 3D simulations of these transitions.<\/p>\n<p>When a big job comes to the HPCC from Oishi\u2019s lab in Carnegie Science, the job goes first to one of the 12 computers, each of which is known as a &#8220;node.&#8221; The receiving node, called the &#8220;head node,&#8221; is the traffic cop that manages requests and draws on the other 11 nodes\u2019 computing power as needed.<\/p>\n<blockquote><p>In computer parlance, this gabfest is called \u201call-to-all\u201d communication, and it\u2019s a hallmark of high-performance computing.<\/p><\/blockquote>\n<p>\u201cThe head node splits up the job, with pieces going to everybody else,\u201d explains Jim Bauer, director of network and infrastructure services for ILS. \u201cThe nodes all communicate with each other, assemble the results, and send it back to the head node.\u201d<\/p>\n<p>As the job gets crunched and all the processors inside all the nodes swing into action, each processor is \u201ctalking with every other processor all the time,\u201d Oishi says. It&#8217;s an overlapping conversation, like a Robert Altman movie. 
In computer parlance, this gabfest is called \u201call-to-all\u201d communication, and it\u2019s a hallmark of high-performance computing.<\/p>\n<p>The speed of this conversation depends on a dazzling array of pricey cables that connect every node to every other node. So while you can use $2 HDMI cables for your home theater, in this case, \u201ccables do matter,\u201d says Bauer. Collectively called \u201cfabric,\u201d all this high-speed networking allows data to whip around at 100 gigabits per second, the gold standard of high-speed computing.<\/p>\n<div id=\"attachment_103860\" style=\"width: 343px\" class=\"wp-caption alignright\"><a href=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9196.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-103860\" class=\"wp-image-103860\" src=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9196-600x900.jpg\" alt=\"web-161004_high_performance_computing_cluster_9196\" width=\"333\" height=\"500\" srcset=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9196-600x900.jpg 600w, https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9196-200x300.jpg 200w, https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9196-133x200.jpg 133w, https:\/\/www.bates.edu\/news\/files\/2016\/10\/web-161004_high_performance_computing_cluster_9196.jpg 720w\" sizes=\"(max-width: 333px) 100vw, 333px\" \/><\/a><p id=\"caption-attachment-103860\" class=\"wp-caption-text\">High-speed (100 gigabits per second) cables and networking, known as &#8220;fabric,&#8221; connect the 12 nodes of the HPCC.<\/p><\/div>\n<p>The Bates fabric is distinctive, Bauer says. 
\u201cWe went with new technology,\u201d known as Intel Omni-Path, \u201cthat is more cost-effective and twice as fast\u201d as the fabric typically used in an HPCC.<\/p>\n<p>In terms of computing specs, each of the 12 nodes in the Bates HPCC has 28 cores, the processing units that do the work, for a total of 336 cores. A team from Dell helped Bates design the setup, which has 1.5 terabytes of RAM and 48 terabytes of disk space.<\/p>\n<p>By historical comparison, the first academic computer that Bates purchased, in 1979, was a Prime 550 with roughly three-quarters of a megabyte of RAM and 300 megabytes of storage space.<\/p>\n<p>&#8220;It was a leader of its day,&#8221; says Bauer, and it could process 700,000 &#8220;instructions&#8221; per second. &#8220;The HPCC runs at just under six billion instructions per second.&#8221;<\/p>\n<p>That\u2019s impressive, but computing power is relative, of course. To meet computing needs that are even bigger, Oishi uses NASA\u2019s Pleiades Supercomputer, which has nearly 200,000 cores compared to Bates&#8217; 336. \u201cBut otherwise, it\u2019s built on nodes almost identical to ours,\u201d he says.<\/p>\n<p>As he meets the Bates HPCC for the first time, Oishi says he&#8217;s \u201cimpressed by how small the cluster is\u201d compared with some that he\u2019s used in the past.<\/p>\n<p>Since computers today use so much less power than their forebears, they give off less heat, which means the components of an HPCC can be packed close together \u2014 notwithstanding the air-conditioning that blasts away from one corner of the room.<\/p>\n<p>Had Oishi arrived at Bates as a solo computational physicist, the college might have created a one-off HPC setup for him. 
But Oishi \u2014 who is the co-principal investigator of a NASA grant that will send $114,000 to Bates to support his research on stellar magnetism \u2014 was joined this fall by another new assistant professor of physics, Aleks Diamond-Stanic.<\/p>\n<p>Himself the holder of a $108,000 grant from the Space Telescope Science Institute to study the stellar mass of starbursts, Diamond-Stanic uses big data from the Hubble Space Telescope and the Sloan Digital Sky Survey to study the evolution of galaxies and supermassive black holes. So he, too, had big-time computing needs.<\/p>\n<blockquote><p>The Bates HPCC setup is known as the \u2018condo\u2019 model: The infrastructure belongs to the college, and professors sort of \u201cbuy into it.&#8221;<\/p><\/blockquote>\n<p>The pair\u2019s arrival \u2014 and the anticipation that more and more faculty will need powerful computing resources \u2014 created a critical mass for the HPCC project, says White. \u201cWhen Jeff talked to us about his needs, and Aleks talked about his needs, we saw a way to support them while serving the larger Bates community, too.\u201d<\/p>\n<div id=\"attachment_103865\" style=\"width: 224px\" class=\"wp-caption alignright\"><a href=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/160818_Aleksandar_Diamond-Stanic_005-copy.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-103865\" class=\"wp-image-103865 size-medium\" src=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/160818_Aleksandar_Diamond-Stanic_005-copy-214x300.jpg\" alt=\"160818_aleksandar_diamond-stanic_005-copy\" width=\"214\" height=\"300\" srcset=\"https:\/\/www.bates.edu\/news\/files\/2016\/10\/160818_Aleksandar_Diamond-Stanic_005-copy-214x300.jpg 214w, https:\/\/www.bates.edu\/news\/files\/2016\/10\/160818_Aleksandar_Diamond-Stanic_005-copy-643x900.jpg 643w, https:\/\/www.bates.edu\/news\/files\/2016\/10\/160818_Aleksandar_Diamond-Stanic_005-copy-143x200.jpg 143w, 
https:\/\/www.bates.edu\/news\/files\/2016\/10\/160818_Aleksandar_Diamond-Stanic_005-copy.jpg 771w\" sizes=\"(max-width: 214px) 100vw, 214px\" \/><\/a><p id=\"caption-attachment-103865\" class=\"wp-caption-text\">Assistant Professor of Physics Aleks Diamond-Stanic will use the HPCC to crunch big data from the Hubble Space Telescope and the Sloan Digital Sky Survey. (Josh Kuckens\/Bates College)<\/p><\/div>\n<p>In fact, White, Bauer, and others had already been testing some ideas for HPC at Bates. \u201cThanks to Aleks and Jeff, we went from zero to 60 just like that,\u201d White says.<\/p>\n<p>The Bates HPCC setup is known as the \u2018condo\u2019 model: The infrastructure belongs to the college, and professors sort of \u201cbuy into it,\u201d Oishi says. That is, he and Diamond-Stanic have contributed some of their startup funds \u2014 research dollars that the college provides to new faculty \u2014 to purchase new nodes. In return, they&#8217;ll have priority access.<\/p>\n<p>\u201cI think that\u2019s a really great model because it\u2019s expandable and it\u2019s flexible,\u201d says Oishi. Indeed, the fact that the HPCC rack is currently only half-full anticipates that more faculty will buy into the &#8220;condo,&#8221; especially professors in the college&#8217;s new Digital and Computational Studies Program, slated to debut as a major in fall 2018.<\/p>\n<p>In that sense, the Bates HPCC will \u201csupport our current faculty and help us attract new colleagues,\u201d says White.<\/p>\n<p>Thirty years ago, if you wanted access to powerful computing, you tapped into a shared resource. By the 1990s, when Oishi was at the American Museum of Natural History, things had changed.<\/p>\n<p>At the museum, he worked with a computational biologist, Ward Wheeler, who was custom-building his own HPCCs to research the evolution of tree DNA over the past 500 million years. 
Ward\u2019s budget wasn\u2019t huge, Oishi recalls, \u201cbut he realized that he could order parts,\u201d such as processors and cases, \u201cfrom a commodity source, get a team of people in his office with screwdrivers, and put them all together over a weekend.\u201d<\/p>\n<p>Fast forward a decade, to when Oishi was a postdoc at Berkeley, and every researcher with any kind of budget was building their own HPCC. \u201cIt fell to the faculty member or a grad student to maintain them because the IT department would say, \u2018Do whatever you want, but we\u2019re not touching that. You brought it in, you built it, you put it in your closet.\u2019 And we were running out of closet space.\u201d<\/p>\n<p>That created waste and redundancy, so by around 2010 colleges and universities began to move back to offering shared high-performance computing resources. \u201cThey stepped in and said, \u2018Let us do this for you,\u2019\u201d Oishi says.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The college&#8217;s new high-performance computing cluster is a &#8220;once and future&#8221; 
proposition.<\/p>\n","protected":false},"author":104,"featured_media":103859,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_hide_ai_chatbot":false,"_ai_chatbot_style":"","associated_faculty":[],"_Page_Specific_Css":"","_bates_restrict_mod":false,"_table_of_contents_display":false,"_table_of_contents_location":"","_table_of_contents_disableSticky":false,"_is_featured":false,"footnotes":"","_bates_seo_meta_description":"","_bates_seo_block_robots":false,"_bates_seo_sharing_image_id":0,"_bates_seo_sharing_image_twitter_id":0,"_bates_seo_share_title":"","_bates_seo_canonical_overwrite":"","_bates_seo_twitter_template":""},"categories":[4],"tags":[11368,11242],"class_list":["post-103856","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-academic-life","tag-aleksandar-diamond-stanic","tag-jeffrey-oishi"],"_links":{"self":[{"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/posts\/103856","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/users\/104"}],"replies":[{"embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/comments?post=103856"}],"version-history":[{"count":22,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/posts\/103856\/revisions"}],"predecessor-version":[{"id":104584,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/posts\/103856\/revisions\/104584"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/media\/103859"}],"wp:attachment":[{"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/media?parent=103856"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/categories?post=103856"},{"taxono
my":"post_tag","embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/tags?post=103856"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}