Supercomputers – Fun With Justin Wed, 04 Aug 2021 09:43:26 +0000

Running quantum software on a classical computer – sciencedaily Tue, 03 Aug 2021 21:54:26 +0000

In an article published in npj Quantum Information, EPFL professor Giuseppe Carleo and Matija Medvidović, a graduate student at Columbia University and the Flatiron Institute in New York, have found a way to run a complex quantum computing algorithm on traditional computers instead of quantum computers.

The specific “quantum software” they envision is known as Quantum approximate optimization algorithm (QAOA) and is used to solve classical optimization problems in mathematics; it is essentially a way of choosing the best solution to a problem from a set of possible solutions. “There is a lot of interest in understanding what problems can be solved efficiently by a quantum computer, and QAOA is one of the more prominent candidates,” Carleo said.

Ultimately, QAOA is meant to help us on our way to the famous “quantum speedup,” the predicted increase in processing speed that quantum computers can achieve over conventional computers. Naturally, QAOA has a number of supporters, including Google, which has its sights set on near-term quantum technologies and computing: in 2019, it created Sycamore, a 53-qubit quantum processor, and used it to perform a task it estimated would take a state-of-the-art conventional supercomputer about 10,000 years. Sycamore completed the same task in 200 seconds.

“But the bar for ‘quantum speedup’ is anything but rigid; it is continually being reshaped by new research, not least by advances in the development of more efficient classical algorithms,” explains Carleo.

In their study, Carleo and Medvidović address a key open question in the field: can algorithms running on current and near-term quantum computers offer a significant advantage over classical algorithms for tasks of practical interest? “If we are to answer this question, we must first understand the limits of classical computing in simulating quantum systems,” Carleo explains. This is all the more important because the current generation of quantum processors operates in a regime where errors occur when executing quantum “software,” so they can only execute algorithms of limited complexity.

Using conventional computers, the two researchers developed a method capable of approximately simulating the behavior of a special class of algorithms known as variational quantum algorithms, which are ways of determining the lowest-energy state, or “ground state,” of a quantum system. QAOA is an important member of this family of quantum algorithms, which researchers believe are among the most promising candidates for a “quantum advantage” on near-term quantum computers.
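As a purely classical illustration of that variational idea (a toy sketch, not the authors' method), the code below tunes one parameter of a trial state to minimize the energy of a single-qubit Hamiltonian H = Z, whose exact ground-state energy is -1:

```python
import numpy as np

# Toy variational search: find the ground state of H = Z (one qubit).
# The trial state |psi(t)> = [cos(t/2), sin(t/2)] is varied classically;
# the exact ground-state energy is -1, reached at t = pi.
H = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli-Z Hamiltonian

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi  # expectation value <psi|H|psi>

thetas = np.linspace(0, np.pi, 200)
best = min(thetas, key=energy)
print(round(energy(best), 3))  # approaches -1.0
```

Real variational quantum algorithms run the same optimize-the-parameters loop, but with many parameters and a quantum processor estimating the energy.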

The approach is based on the idea that modern machine learning tools, such as those used to learn complex games like Go, can also be used to learn and mimic the inner workings of a quantum computer. The key tool for these simulations is Neural Network Quantum States, an artificial neural network ansatz that Carleo developed in 2016 with Matthias Troyer and which is now used for the first time to simulate QAOA. The results are regarded as a milestone for the field of quantum computing and set a new benchmark for the future development of quantum hardware.
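At its core, a Neural Network Quantum State assigns an amplitude to every spin configuration via a small network. Below is a minimal sketch of the standard restricted-Boltzmann-machine form with random placeholder weights (illustrative only; the paper's trained networks and training procedure are not reproduced here):

```python
import numpy as np

# Minimal sketch of a restricted-Boltzmann-machine (RBM) quantum state:
# the (unnormalized) amplitude of a spin configuration s in {-1,+1}^n is
#   psi(s) = exp(a . s) * prod_j 2*cosh(b_j + (s . W)_j)
# Weights below are random placeholders, not a trained model.
rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 8
a = rng.normal(scale=0.1, size=n_visible)             # visible biases
b = rng.normal(scale=0.1, size=n_hidden)              # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden)) # couplings

def amplitude(s):
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + s @ W))

print(amplitude([1, -1, 1, 1]) > 0)  # this toy ansatz gives positive reals
```

Training such a network to follow the QAOA circuit's evolution is the hard part; the ansatz itself fits in a few lines.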

“Our work shows that the QAOA you can run on current and near-term quantum computers can also be simulated, with good accuracy, on a conventional computer,” Carleo explains. “However, this does not mean that all useful quantum algorithms that can be executed on near-term quantum processors can be emulated classically. In fact, we hope that our approach will serve as a guide for designing new quantum algorithms that are both useful and hard to simulate on conventional computers.”

Story source:

Materials provided by the Swiss Federal Institute of Technology in Lausanne (EPFL). Original written by Nik Papageorgiou. Note: Content may be edited for style and length.


These Scientists Use VR to Develop COVID Drug Tue, 03 Aug 2021 11:00:52 +0000

Last year, when Italy was under siege by COVID-19, scientists from Exscalate4Cov, a public-private consortium of 18 institutions across Europe led by Italian pharmaceutical company Dompé farmaceutici, had just started the hunt for a cure for COVID-19. Eight scientists, all from across Europe, gathered in a virtual room to discuss potential molecules. Each scientist held up a 3D render of a molecule they had simulated and walked through it with the others. Inside that space, the scientists could together scour these molecules, take them apart, enlarge them, and link them to possible compounds. They asked each other questions and, on a virtual whiteboard, outlined the possibilities for success and failure in each complex. The virtual framework also allowed them to compare molecules side by side.

Armed with $3 million in funding from the European Union, the group collected treatment suggestions and analyzed those suggestions using supercomputers. In October, they submitted their first candidate for a Phase III clinical trial in Europe: a generic osteoporosis drug called raloxifene.

The trial is now over. “We are awaiting the final results, but we are very confident about the possible success of the clinical trial,” says Andrea Beccari, scientific manager at Exscalate and head of research and development platforms at Dompé farmaceutici. The result will not only determine whether raloxifene will work against COVID-19, but it could also inform the design of new drugs.

To create a new drug, scientists first look at how a disease enters human cells, then devise a mechanism to interfere with that infection. Traditionally, they’ve done it on paper, sketching out proteins and simulating how a molecule or compound might bind to them. Current software often does not provide enough visual context for scientists to understand the full extent of the relationship between molecules, especially those with multiple binding sites. That’s why Exscalate worked with a company called Nanome, which hopes to accelerate drug development by giving scientists a way to visualize molecules in three-dimensional space on an Oculus headset.

Beccari said that using supercomputers, the group took a list of 400,000 potential molecules and simulated their ability to cling to proteins of the COVID-19 virus. In addition to analyzing them via computers, they also used virtual reality to better understand how these compounds might bind to COVID-19 viral proteins and how they would work in humans. What was important to predict was whether a drug would be able to reach the lungs.
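Conceptually, that screening step is a rank-and-filter over candidates. Here is a toy sketch with invented binding scores (the molecule names, scores, and threshold are hypothetical, not Exscalate's data; real pipelines compute such scores with physics-based docking on supercomputers):

```python
# Toy virtual-screening sketch: rank candidate molecules by a made-up
# predicted binding score and keep only the strongest binders.
candidates = {
    "mol_A": -6.2,   # more negative = stronger predicted binding (kcal/mol)
    "mol_B": -9.1,
    "mol_C": -4.8,
    "mol_D": -8.5,
}

def top_hits(scores, threshold=-8.0):
    """Return molecules whose predicted binding beats the threshold,
    strongest first."""
    hits = [m for m, s in scores.items() if s <= threshold]
    return sorted(hits, key=scores.get)

print(top_hits(candidates))  # ['mol_B', 'mol_D']
```

With 400,000 candidates the logic is the same; only the scoring function becomes expensive, which is where the supercomputers come in.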

“For example, Remdesivir, which is a very good antiviral molecule, has very little effect on humans simply because it does not reach the lungs in sufficient concentration,” explains Beccari. But in their machine-learning-based analysis, they found a family of molecules capable of inhibiting the virus and reaching the lungs, he says. The first of these molecules is raloxifene.

“Computers always generate solutions,” Beccari says. “But all of these simulations aren’t good just because the computer says so.”

Beccari says the platform gives scientists far more information than they can easily glean from a two-dimensional format. This ultimately speeds up their ability to sift through the molecules their supercomputers suggest as plausible candidates. In the future, he would like to see 3D platforms like Nanome integrate with other platforms and tools. For example, his organization created an ultra-fast algorithm for molecular docking. It would be great, he says, to do both their computational work and their collaborative work in one space.

Going forward, the group will work on designing drugs similar to raloxifene that improve on its current capabilities against COVID-19. In this context, says Beccari, collaboration between scientists will be particularly essential. “In the age of artificial intelligence, we think people still rule,” he says.


The whole world is watching: Bernholdt and ORNL programming environment team get ready for Frontier and Exascale Thu, 22 Jul 2021 15:52:42 +0000

David Bernholdt from ORNL

This article is by Matt Lakin, science editor at Oak Ridge National Laboratory

The world’s fastest supercomputer comes with some assembly required.

Frontier, the country’s first exascale computer system, will not be assembled as a whole until all parts have been delivered to the US Department of Energy’s (DOE) Oak Ridge National Laboratory for installation – under the eyes of the world – on the data center floor inside the Oak Ridge Leadership Computing Facility (OLCF).

Once these components work in harmony as advertised, David Bernholdt and his team can take a quick bow and then get back to work.

Bernholdt and his team are leading efforts to ensure that the myriad compilers, performance tools, debuggers, and other pieces of the Frontier puzzle all fit together to deliver the peak results expected. The feeling can be like a pit crew tweaking a race car, except this car is the first of its kind and is heading into the last lap while they are working on it.

The HPE Cray system promises computing speeds of over 1 quintillion calculations per second – more than 10^18, or a billion billion, operations – and could help solve problems across the scientific spectrum when Frontier opens its doors to full user operations in 2022.
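The scale is easy to sanity-check with back-of-envelope arithmetic; the figures below are round numbers (Summit's roughly 200 petaflops is used for comparison):

```python
# Back-of-envelope scale check for exascale throughput.
exaflop = 1e18          # operations per second (1 quintillion)
summit = 200e15         # Summit's ~200 petaflops, for comparison

ops = exaflop * 1.0     # work done by an exascale machine in 1 second
print(f"Summit needs {ops / summit:.0f} s for 1 s of exascale work")

# Everyone on Earth doing one calculation per second would need
# about four years to match one second of exascale output.
world_pop = 7.9e9
print(f"{exaflop / world_pop / 86400 / 365:.1f} years")
```

That five-to-one ratio over Summit is what "exascale" buys in raw throughput, before any software tuning.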

“When the DOE started to think seriously about exascale, around 2009, we wondered if it would even be possible to do it,” said Bernholdt. “I always thought we would find a way. Now the technology has evolved to the point that it’s not as scary to build, program, and debug as we thought it was, and we’re coming to the finish line. It has been quite a journey.”

Getting Frontier to the finish line for his team includes interacting with users – scientists and engineers keen to port their codes and run high-speed simulations of everything from weather models and cancer stages to nuclear reactions and collapsing stars – and with corporate vendors working tirelessly to meet specifications and deliver a one-of-a-kind product. As the OLCF encourages standards-based programming environments where possible, the team also works closely with various standards organizations to ensure that those standards reflect user needs and make the most of the vendors’ hardware capabilities.

“It’s sometimes intimidating to remember: it’s serial number 1,” said Bernholdt. “They created this machine just for us, and even the best hardware and software won’t be perfect, at least not right out of the box. A big part of what we do is try to understand what users will need from the system in order to use it effectively and help them represent it in their codes.

“At the same time, we are working with our vendors and compiler teams to make sure their solutions implement the standards we need and deliver the necessary performance on the system. We need to make sure that vendors provide detailed and granular enough information about the system, on time, for software developers to take advantage of, and we need to make sure that the languages have evolved enough to perform the tasks. There are a lot of moving parts, and they only speed up as we go along.”

Bernholdt’s scientific training prepared him for the mission. He spent his early research career in computational chemistry and helped develop NWChem, a scalable computational chemistry software package still in use around the world. Plans call for a revamped version of the package to run on Frontier and other exascale supercomputers, such as Aurora.

Bernholdt then turned to computer science and software development to design and refine tools that tackle the same problems he encountered as a scientific user. He and his team helped program and debug Frontier’s supercomputing predecessors: Titan (27 petaflops, or 27 quadrillion calculations per second) and Summit (200 petaflops, or 200 quadrillion calculations per second).

“This is our third accelerator-based machine, so we have a pretty good idea of how to program them,” said Bernholdt. “The biggest challenge has been the timing. We had maybe half the time to prepare Frontier that we had for Summit, and it’s a new software stack that had everyone scrambling. But that means more opportunities and incentives for optimization.”

This around-the-clock job will not end when Frontier turns on. Bernholdt and his team will continue to monitor supercomputer performance and look for ways to raise standards and improve performance.

“It never stops,” Bernholdt said. “It’s always very satisfying to see people able to use the system wisely, but that’s not the end of it. Frontier will continue to evolve and improve, and we will be a part of it. I feel pretty confident in saying that there is no other place on Earth right now that could support a similar project of this scale and importance.”


No needles? No problem. This COVID-19 vaccine could be inhaled. Tue, 20 Jul 2021 21:12:40 +0000

Scientists have come up with a new way to get vaccinated against the coronavirus that causes COVID-19, and it comes with a twist: No needles are needed.

Rather, this vaccine would be aerosolized so that it could be inhaled by a patient.

Paul Whitford, associate professor of physics at the College of Science and the Center for Theoretical Biological Physics at Northeastern. Courtesy photo

The researchers tested this vaccination strategy in mice, and it elicited a strong immune response. A team led by researchers from Northeastern University, Rice University and Rutgers University published a proof-of-concept study in the journal Proceedings of the National Academy of Sciences this week. The project is still in its early stages, but the team sees the vaccine they are developing as a way to expand the reach of COVID-19 vaccines around the world.

“If we can have this new tool, that would be great. It’s easy to produce, easy to ship, easy to administer,” says Paul Whitford, associate professor of physics at Northeastern and author of the new paper. Such an inhalable COVID-19 vaccine would not require the precise refrigeration of existing inoculations and could be more easily distributed in rural and remote communities. “You just need basic instructions on how to use an inhaler.”

The team’s vaccine strategy uses modified bacteriophage particles to deliver instructions to the immune system – via the lungs – to develop a protective response to SARS-CoV-2, the coronavirus that causes COVID-19.

Bacteriophage particles (or phage particles for short) are viruses that infect bacteria but are harmless to humans and have been used to treat bacterial infections in humans for a century.

In this new vaccine strategy, a phage particle in the immunizing mist is much like a visitor knocking on the door of lung tissue. It has an outstretched arm for the lung tissue to grab and a backpack full of immune instructions on its back, explains Whitford.

This image shows SARS-CoV-2 (round blue objects) emerging from the surface of cells grown in the laboratory. SARS-CoV-2 is the virus that causes COVID-19. The virus shown was isolated from a patient in the United States. Photo: National Institute of Allergy and Infectious Diseases

The phage particles have been engineered to contain a protein (the metaphorical arm) that lung cells will recognize and pull into the recipient’s bloodstream. “You have to reach out and say, ‘Let me in!’” he said. “And then, ‘Okay, I have something for you.’”

This “something” is a precious cargo: tiny pieces of the SARS-CoV-2 spike protein. But it’s not just any piece. This one is called an “epitope”: the part of the invading protein where an antibody can attach itself to the offending virus to keep it from infecting one of our cells.

The idea is to pass these parts of the virus to the body’s immune system to give it some sort of practice to fight SARS-CoV-2. That way, says Whitford, if you’re exposed to the real virus, your immune system will know what to do immediately.

But there is a wrinkle. The spike protein contains many different epitopes. And some of them lose their shape (and therefore their properties) when you remove them from the rest of the virus.

Whitford and his colleagues at the Center for Theoretical Biological Physics, hosted jointly at Northeastern and Rice, turned to supercomputers. The team performed simulations of what would happen when certain selected epitopes were transferred onto a phage. Their analysis identified which epitope would retain its structure and best train the immune system to attack the true SARS-CoV-2. Then Rutgers’ experimental team developed the vaccine and tested it on mice.

“Practically, experimentally, you can’t make a thousand candidate vaccines and test them all just to see which one works,” says Whitford. “You can’t use that many mice just to see if it will work.”

The recently published study is largely preliminary, as there are many other epitope candidates that the team has yet to examine. Sorting out all the possible configurations using supercomputers is the next step, says Whitford. “This study provides a kind of proof of principle that this is a decent strategy,” he says. “It was our first pass.”

For media inquiries, please contact Marirose Sartoretto at or 617-373-5718.


Tachyum Triples Company Valuation at End of Series B Funding Cycle Tue, 20 Jul 2021 18:28:57 +0000

LAS VEGAS, July 20, 2021 – Tachyum today announced that it has closed its Series B round, led by private equity investor IPM Group in cooperation with Slovak wealth manager Across Private Investments. The latest funds raised during the round will be used to finalize Prodigy as it moves from a successfully demonstrated FPGA prototype to tape-out and then manufacturing of the world’s first universal processor chip. Tens of millions of dollars were raised in the Series B.

The IPM Group is a global asset and wealth management company, and IPM is seen as a key strategic partner of Tachyum, in particular in advancing a Slovakia-based supercomputer project as well as the European Union’s global positioning as a leading technology hub.

Close collaboration between IPM and Tachyum has enabled advancements in the data center market with InoCloud – an IPM company that aims to acquire and build high performance, energy efficient data centers where Prodigy will be deployed. An HPC cluster using Prodigy is being deployed across the EU as part of the NSCC Slovakia supercomputer project with a baseline design expected in the first half of 2022 before becoming fully operational later in the year.

“We are delighted to strengthen our position as lead investor in Tachyum’s Series B round and to support Tachyum in its further development on the path to production. With the ever-increasing demand for cloud computing and the associated environmental concerns, Tachyum’s vision of providing a revolutionary solution for sustainable digitization is needed more than ever,” said Adrian Vycital, managing partner of IPM Group and member of Tachyum’s board of directors.

Tachyum’s technological leadership and the recruitment of top talent both globally and in Slovakia have made it one of the most capital efficient companies. Tachyum has successfully developed a proprietary instruction set architecture (ISA) and processor architecture that its closest competitors cannot achieve, despite spending much more. Tachyum increased its valuation by 3x in USD.

Tachyum is planning a Series C funding round to provide the capital needed to achieve profitability.

Prodigy has the potential to create unmatched compute speed and vast power saving capabilities for the hyperscale, OEM, telecommunications, private cloud and government markets. Prodigy’s 10 times lower CPU core power consumption will significantly reduce the carbon emissions associated with data center use. Prodigy’s 3X lower cost (at equivalent performance) will also translate into billions of dollars in annual savings for hyperscalers like Google, Facebook, Amazon, and Alibaba.

Tachyum’s Prodigy processor can run HPC applications, convolutional AI, explainable AI, general AI, bio AI, and spiking neural networks, as well as normal data center workloads, on a single homogeneous processor platform, using existing standard programming models. Without Prodigy, hyperscale data centers must use a mix of disparate CPU, GPU, and TPU hardware for these different workloads, creating inefficiency, expense, and the complexity of separate provisioning and maintenance infrastructures. Using specific hardware dedicated to each type of workload (e.g., data center, AI, HPC) results in underutilization of hardware resources and more difficult programming, support, and maintenance. Prodigy’s ability to seamlessly switch between these different workloads dramatically changes the competitive landscape and the economics of data centers.

“Funding an advanced-stage company like Tachyum is a vote of confidence that all of the work and progress we have made to date will continue to deliver exponential value to investors in the future,” said Dr Radoslav Danilak, founder and CEO of Tachyum. “IPM has been a strong ally and a key partner. We are excited to share a vision of how data centers can be transformed into universal data centers, and we look forward to continuing our work to enable human-brain-scale AI and move the world forward into an era of greener computing.”

About Tachyum

Tachyum is disrupting the data center, HPC and AI markets by delivering the world’s first universal processor, with industry-leading performance, cost and power across all three compute domains, while enabling data centers to exceed the capacity of the human brain. Tachyum, co-founded by Dr Radoslav Danilak, and its flagship product Prodigy begin high-volume production in 2021, with software emulations and an FPGA-based emulator available to early adopters. The company is targeting a $50 billion market growing at 20% per year. With data centers currently consuming more than 3% of the planet’s electricity, and expected to reach 10% by 2025, the ultra-low-power Prodigy universal processor is essential if we are to continue doubling global data center capacity every four years. Tachyum is a founding member of I4DI (Innovations for Digital Infrastructure), which will build the world’s fastest AI supercomputer in Slovakia, featuring Prodigy. Tachyum has offices in the USA and in Slovakia, EU. For more information, visit

Source: Tachyum


Israeli startup aims to be the Mellanox of quantum – Sponsored Content Tue, 20 Jul 2021 06:56:15 +0000

In 2020, Israeli IT company Mellanox hit the headlines when it was bought for $7 billion by US computing giant Nvidia. Mellanox’s technology allows supercomputers to be built by linking many powerful computers together.

Aharon Brodutch, CEO and co-founder of Canadian-Israeli startup Entangled Networks, hopes his company can emulate that success by doing a similar task for quantum computers – widely regarded as the ultra-supercomputers of tomorrow.

The promise of quantum computing – based on the behavior of subatomic particles – is huge, but so far it has failed to deliver on its promises. In theory, a quantum computer could reduce the time of complex calculations from several years to “seconds,” according to Google CEO Sundar Pichai. But large quantum machines that could solve complicated problems, develop new materials, and transmit hacker-proof data, are too fragile to be built. They require very cold temperatures and an isolated and quiet environment to operate.

“Every component you add starts to make noise, interfering with other components,” explains Brodutch. “It’s incredibly difficult to keep scaling up while reducing the noise. This is essentially what keeps quantum computers from getting huge in the very near future.”

In quantum computing, information is encoded in qubits, the quantum equivalent of bits in classical computing that have a value of zero or one. With today’s technology, if more than a few dozen qubits are included in the core of a quantum computer, the resulting noise and vibration prevent the machine from performing any serious calculations.

“These computers have insane computing power. But there is a catch: they need millions of qubits to solve these problems, and currently leading systems have fewer than 100 qubits. How do you take these sophisticated science-toy projects and scale them to the point where they can solve problems that are fundamentally unimaginable for conventional computers?” said Brodutch. “You have to go beyond being a science toy, and that’s the solution we’re offering.”
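The gap Brodutch describes has a concrete arithmetic core: fully describing n qubits classically takes 2^n complex amplitudes, so memory alone becomes prohibitive long before a million qubits. A quick sketch:

```python
# Memory needed to store the full state vector of n qubits classically:
# 2**n complex amplitudes at 16 bytes each (complex128).
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

print(statevector_bytes(10))          # 16384 bytes: trivial
print(statevector_bytes(30) / 2**30)  # 16.0 GiB: a large workstation
print(statevector_bytes(50) / 2**50)  # 16.0 PiB: beyond any single machine
```

Every added qubit doubles the requirement, which is why ~50 qubits already sits at the edge of classical brute-force simulation.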

Entangled Networks develops hardware and software to connect multiple quantum computers to maximize their potential. The company is developing interconnects – hardware that connects many smaller quantum computers – potentially allowing thousands or millions of qubits to work together without creating the noise that would disturb them if they were all part of the same device.

“This will create the holy grail of the industry,” said Eli Nir, general partner and investment manager of the Jerusalem-based equity investment platform OurCrowd, which backs the company. “The biggest challenge in quantum computing today is the need to scale. Entangled’s is perhaps the only viable approach to solving this problem.”

IBM and other leaders in quantum computing expect to be able to create computers with up to 1,000 qubits by 2024, but the millions of qubits needed to maximize quantum potential remain elusive.

A dilution refrigerator used to cool Intel quantum systems, creating the ideal environment for optimal qubit performance. (Intel Corporation)

“Going beyond a thousand qubits to reach a million will require a new, bolder vision of integration that has not yet been defined in a scalable way,” said Nadav Katz, a quantum information physicist at the Hebrew University of Jerusalem. “There are some very exciting ideas on how to do this, but this is the next scientific leap that has to happen.”

Brodutch says Entangled Networks has the answer.

“A single-core solution just can never scale to the point where you can have the kind of computer that can solve unimaginable problems,” he says. Entangled’s approach will advance quantum computers “generations in terms of computing power” and will likely be available in a few years, well before the development of larger computing cores, he says.

Brodutch and Nir say the comparison with Mellanox, the third-largest tech release in Israel’s history, is valid.

“The same solution will exist in the quantum,” says Brodutch. “But it’s a whole different task in terms of hardware.”

Based on the principles of quantum mechanics first described by Einstein and his peers, quantum computers use particles like photons, electrons, and atoms to encode information, offering significant advantages in speed and security compared to conventional computers. Instead of encoding information in binary digits called bits – sets of zeros and ones – each quantum bit, or qubit, can be not only a zero or a one but also coherently suspended anywhere between the two states, dramatically increasing computing power and memory by simultaneously processing an exponential number of possible states.

Members of the IBM Quantum team are studying how to control increasingly large qubit systems for long enough, and with few enough errors, to perform the complex calculations required by future quantum applications. (Connie Zhou for IBM)

Quantum networks could also offer major security benefits, as information would travel in photons of light that may be physically distant from each other, but linked together by a concept called entanglement. If a hacker attempted to access these particles, the entanglement would be damaged and the information would be scrambled, allowing eavesdropping to be detected and prevented.

Although most quantum computers are still confined to labs, the market is growing by more than 30% each year and is expected to reach $ 1.7 billion by 2026, according to MarketsandMarkets. Quantum computing is poised to transform many industries, starting with chemical and pharmaceutical research, and “anything that requires in-depth chemistry simulations at the atomic level,” explains Brodutch.

“If you’re trying to develop a new drug, or a new battery, or a new fertilizer or a new material, what you do today is go to the lab and start experimenting, but it is very expensive and time-consuming,” he explains. “What you would like to do is run a simulation on your computer. But a typical computer simply cannot meet this challenge – not even a supercomputer.”

But a quantum computer could.

“You just run simulations on your quantum computer until you find the solution, the right molecule,” he says.

The US Department of Energy, with 50 partner organizations, is building a nationwide quantum internet with a more secure mechanism for financial transactions and other sensitive data, eliminating the need for encryption. Google Inc., which was the first in the world to demonstrate the advantage of quantum computers over ordinary computers, started a quantum computing division, as did IBM, Intel, and Microsoft. Chinese scientists have used quantum computers to perform calculations that would be impossible for traditional computers.

“Quantum computing is making steady and remarkable progress,” Katz says. “However, it is important to understand the magnitude of the scientific and technical challenge involved. It is extraordinarily difficult, and it will take years of continuous effort to move forward.”

Quantum computing could contribute to the fields of artificial intelligence, cybersecurity, financial modeling, logistics optimization and many more, Katz explains.

For more information on Entangled Networks and investing through OurCrowd, click HERE


The future of supercomputing is happening now … Mon, 19 Jul 2021 13:16:06 +0000

Promo If you want to reap the benefits of accelerated computing and high-bandwidth memory, as well as the growing wave of Arm-based computation, you don’t have to wait: you don’t need to buy CPU-GPU systems, and you don’t need to adopt a complex hybrid programming model.

All you need is an HPE Apollo 80 with the A64FX processor – the same processor used in the “Fugaku” system built by Fujitsu at the RIKEN lab in Japan, currently the fastest supercomputer in the world – and you can get started today.

In fact, Hewlett Packard Enterprise has been at the forefront of Arm computing for several years, particularly in supercomputing, and has extended the Apollo 80 system to bring the substantial benefits of the A64FX architecture to more general purpose HPC clusters.

Many HPC applications have memory bandwidth constraints as well as intense compute requirements, which means that many existing CPU architectures force customers to over-provision memory to increase bandwidth. But the Apollo 80 with the A64FX system brings the advantages of GPUs (Massively Parallel Vector Engines and High Bandwidth Memory) to CPUs, eliminating the need for a hybrid architecture.
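The bandwidth-versus-compute trade-off can be made concrete with a simple roofline estimate: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity (FLOPs per byte moved). The figures below are illustrative round numbers, not official A64FX specifications:

```python
# Roofline sketch: a kernel's attainable throughput is capped by either
# peak compute or memory bandwidth times its arithmetic intensity.
def attainable_gflops(peak_gflops, bw_gbps, flops_per_byte):
    return min(peak_gflops, bw_gbps * flops_per_byte)

peak, bw = 3000.0, 1000.0   # ~3 TFLOP/s peak, ~1 TB/s memory bandwidth

# STREAM-triad-like kernel, ~0.08 FLOP/byte: firmly bandwidth-bound
print(attainable_gflops(peak, bw, 0.08))   # 80.0 GFLOP/s
# Dense matrix multiply, tens of FLOPs/byte: compute-bound
print(attainable_gflops(peak, bw, 30.0))   # 3000.0 GFLOP/s
```

For the low-intensity kernels common in HPC, the bandwidth term dominates, which is why high-bandwidth memory on the CPU itself matters more than raw peak FLOP/s.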

But how does it all work in practice? You can find out in detail how HPC centers in the US and Europe tested Apollo 80 systems with the A64FX on their own code, ported HPC applications, ran hackathons and achieved incredible results by joining our TNP discussion panel on July 22 at 9:00 a.m. PDT / 12:00 p.m. EDT / 5:00 p.m. BST, taking place on the homepage of The Next Platform.

Participants include:

Simon McIntosh-Smith, Professor of High Performance Computing and Head of the Microelectronics Group at the University of Bristol

Robert Harrison, Professor in the Department of Applied Mathematics and Statistics and Founding Director of the Institute for Advanced Computational Science at Stony Brook University

Sharda Krishna, Senior Director of HPC and AI Computing Products at Hewlett Packard Enterprise

The proceedings will be moderated by our own Timothy Prickett Morgan, co-editor of The Next Platform.

Don’t miss out on what promises to be a heated discussion of the issues of porting code to the A64FX architecture and the performance benefits that real users see today with this architecture.

To guarantee your place, you just need to register here. After that? It’s a rapid advance into the future.

Sponsored by HPE



Iran plans to build “Maryam Mirzakhani” supercomputer Sun, 18 Jul 2021 12:40:00 +0000

According to Ehsan Aryanian, head of computing platforms at the ICT Research Institute, the first calls for research and design of the “Maryam Supercomputer” have been made.

After receiving the proposals, evaluating them and receiving the necessary funds, the institute will start building the supercomputer with the cooperation of the applicants, he told the Mehr news agency on Sunday.

The Maryam supercomputer will be more powerful than the recently unveiled Simorgh, Aryanian noted without providing details on its processing power.

The Simorgh supercomputer was inaugurated in mid-May. Simorgh’s current processing power is over one petaflops, i.e. one quadrillion (10^15) floating-point operations per second. Simorgh supports companies aiming to develop artificial intelligence. It is able to perform a variety of tasks including big data analysis, artificial intelligence, Internet of Things (IoT), and genetic data analysis.
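For a sense of scale, one petaflops is 10^15 floating-point operations per second. A quick back-of-the-envelope sketch (the workload sizes below are illustrative, not measurements of Simorgh):

```python
PETAFLOPS = 10 ** 15  # floating-point operations per second

def ideal_runtime_seconds(total_flops, machine_flops=PETAFLOPS):
    """Time an ideal machine needs at peak rate (real codes run well below peak)."""
    return total_flops / machine_flops

# Illustrative workloads only:
print(ideal_runtime_seconds(10 ** 15))  # 1.0 second for 10^15 operations
print(ideal_runtime_seconds(10 ** 18))  # 1000.0 seconds, i.e. under 17 minutes
```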

Aryanian also highlighted plans to improve the Simorgh supercomputer, hoping the device could be listed in the Top500 project after the updates.

In 2017, Mirzakhani, winner of the Fields Medal, often called the Nobel Prize of mathematics, died of breast cancer at age 40. She won a gold medal at the International Mathematical Olympiad in Hong Kong in 1994, becoming the first Iranian female student to win a gold medal.

At the International Mathematical Olympiad in Toronto in 1995, she became the first Iranian student to win two gold medals. She obtained her bachelor’s degree in mathematics from Sharif University of Technology, Tehran, in 1999. Mirzakhani then moved to the United States and completed a doctorate at Harvard University in 2004.



What happened to IBM’s Watson? Sat, 17 Jul 2021 14:48:00 +0000

IBM insists its revised AI strategy – a lean, less world-changing ambition – is working. The task of restarting growth was given to Arvind Krishna, an IT specialist who became CEO last year, after leading the recent overhaul of IBM’s cloud and AI business.

But the grand visions of the past have disappeared. Today, instead of being shorthand for tech prowess, Watson stands out as a sobering example of the pitfalls of hype and hubris around AI.

It turns out that the march of artificial intelligence through the mainstream economy will be more of a step-by-step evolution than a cataclysmic revolution.

Time and time again in its 110-year history, IBM has pioneered new technologies and sold them to businesses. The company so dominated the mainframe market that it became the target of a federal antitrust case. PC sales really took off after IBM entered the market in 1981, making small machines essential tools in corporate offices. In the 1990s, IBM helped its traditional business customers adapt to the Internet.

IBM executives have come to see AI as the next wave to ride.

Mr. Ferrucci first pitched the idea of Watson to his bosses at IBM’s research labs in 2006. He believed that building a computer to tackle a question-and-answer game could advance the field of AI known as natural language processing, in which scientists program computers to recognize and analyze words. Another research objective was to advance automated question-answering techniques.

After overcoming initial skepticism, Ferrucci assembled a team of scientists – ultimately more than two dozen – who worked in the company’s lab in Yorktown Heights, NY, about 20 miles north of IBM headquarters in Armonk.

The Watson they built was a room-sized supercomputer with thousands of processors running millions of lines of code. Its storage drives were filled with digitized reference books, Wikipedia entries, and e-books. Computational intelligence is a matter of brute force, and the massive machine required 85,000 watts of power. The human brain, on the other hand, runs on the equivalent of 20 watts.


Supercomputer Analysis Probe Anti-Vaccination Twitter Campaign Fri, 16 Jul 2021 23:33:40 +0000

In early March 2020, on the cusp of the global acceleration of the pandemic, a popular Twitter user called ZDoggMD (in real life, a doctor named Zubin Damania) launched a pro-vaccine rallying cry: #DoctorsSpeakUp. The hashtag, intended to call on real doctors to share the positive realities of immunization with the world, was instead almost immediately hijacked by anti-vaccine activists. Recently published research by a team at the University of Pittsburgh used supercomputers to understand how the event went wrong – and how similar efforts might be protected from such hijacking in the future.

Using Twitter’s filtered stream interface, the researchers extracted all publicly available tweets using the hashtag #DoctorsSpeakUp on March 5, 2020. Five percent of those tweets – around a thousand – were assessed using thematic content analysis, allowing the researchers to study associations between tweet sentiment, account type (likely human or bot), and tweet content (e.g. personal story, statement, etc.). The researchers used a tool called Botometer to assess the likelihood that a given account is a bot.
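The sampling-and-screening pipeline described above can be sketched roughly as follows. The scoring function and 0.8 threshold are hypothetical stand-ins; the real study used the Botometer service, whose API is not reproduced here:

```python
import random

def sample_for_coding(tweets, fraction=0.05, seed=42):
    """Draw the random ~5% sample used for thematic content analysis."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    k = max(1, round(len(tweets) * fraction))
    return rng.sample(tweets, k)

def flag_likely_bots(accounts, bot_score, threshold=0.8):
    """Keep accounts whose bot-likelihood exceeds `threshold`.

    `bot_score` stands in for a classifier such as Botometer; the real
    service returns richer per-account scores than this single number."""
    return [a for a in accounts if bot_score(a) >= threshold]

# Toy usage with synthetic data:
tweets = [f"tweet_{i}" for i in range(1000)]
coded_sample = sample_for_coding(tweets)  # ~50 tweets to hand-code
bots = flag_likely_bots(["@a", "@b", "@c"],
                        bot_score=lambda a: 0.9 if a == "@a" else 0.1)
```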

To perform this data intensive analysis, the researchers turned to local supercomputing resources at the Pittsburgh Supercomputing Center (PSC). There, they used the Bridges System for a while before it was retired in mid-February 2021, when they switched to the Bridges-2 System (which officially began production operations this spring).

“We’ve worked with [the] Pittsburgh Supercomputing Center since before Bridges, through the duration of Bridges, and we are now on Bridges-2,” said Jason Colditz, a University of Pittsburgh researcher and one of the authors of the article, in an interview with PSC’s Ken Chiacchia. Colditz noted that there are “terabytes and terabytes of data that we have collected on Twitter over the course of several years,” but the data moved quickly, requiring stability and availability. “And that’s really where working with PSC has been beneficial,” he said.

Using the supercomputer-powered analyses, the researchers uncovered valuable information: 78.9% of all tweets studied were anti-vaccination; 79.4% of tweets from users claiming to be healthcare professionals supported vaccination; and 96.3% of tweets from users claiming to be parents (but not healthcare professionals) were anti-vaccination. While bots made up only a small portion of tweets, tweets from anti-vaccination bots were five times as numerous as tweets from pro-vaccination bots. In addition, a higher percentage of anti-vaccination tweets linked to scientific information compared to pro-vaccination tweets, although the researchers noted that anti-vaccination tweets were likely to distort the research they cited.
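The headline numbers are simple proportions over the coded sample. A minimal sketch of how such tallies are computed (the records below are synthetic illustrations, not the study’s data):

```python
from collections import Counter

def stance_percentages(coded_tweets):
    """coded_tweets: list of (stance, is_bot) pairs, stance in {'anti', 'pro'}."""
    counts = Counter(stance for stance, _ in coded_tweets)
    total = len(coded_tweets)
    return {stance: 100.0 * n / total for stance, n in counts.items()}

def bot_stance_ratio(coded_tweets):
    """How many times more anti- than pro-vaccine tweets came from bot accounts."""
    bot_counts = Counter(stance for stance, is_bot in coded_tweets if is_bot)
    return bot_counts["anti"] / bot_counts["pro"] if bot_counts["pro"] else float("inf")

# Synthetic illustration only (not the published dataset):
data = ([("anti", False)] * 70 + [("pro", False)] * 20
        + [("anti", True)] * 5 + [("pro", True)] * 1)
```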

The researchers concluded that the hijacking was a “highly coordinated response of dedicated anti-vaccine antagonists.” Going forward, they noted, “it would be helpful to ensure that pro-vaccine messages consider hashtag use and pre-develop messages that can be initiated and promoted by pro-vaccine advocates.”

“This is, I think, a really good time to look at social media to get a sense of what’s going on in these communications,” said Colditz, “and how we might, as public health advocates, be able to alleviate some of that … tough road we see with people hesitant or downright opposed to vaccinations for the current pandemic.”

To read the paper, which was published in the May 2021 issue of Vaccine, click here.

To read the report on this research by Ken Chiacchia of PSC, click here.
