Supercomputers – Fun With Justin

The Pacific Ocean will make way for the next global supercontinent on the planet

According to a new study from Curtin University, Earth’s next supercontinent, Amasia, will form when the Pacific Ocean closes in 200 to 300 million years. Scientists used a supercomputer to simulate the formation of supercontinents.

They discovered that since the Earth has been cooling for billions of years, the plates that support the oceans are thinning and weakening over time. This makes it more difficult for the next supercontinent to form by closing “young” oceans, such as the Atlantic or Indian oceans.

The Pacific Ocean is what remains of the Panthalassa superocean, which began to form 700 million years ago when the ancient supercontinent began to break up. The oldest ocean we have on Earth, it has been gradually shrinking since the time of the dinosaurs, when it was at its largest.

It is currently shrinking by a few centimeters per year, and closing its current width of around 10,000 kilometers is expected to take 200 to 300 million years.
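As a rough back-of-the-envelope check, the quoted timescale follows directly from those figures; the exact convergence rate behind "a few centimeters per year" is an assumption in the sketch below.

```python
# Back-of-the-envelope check of the closure timescale quoted above.
# The convergence rates tried here are assumptions for illustration.

PACIFIC_WIDTH_KM = 10_000          # approximate current width cited in the article

def closure_time_myr(rate_cm_per_year: float) -> float:
    """Time in millions of years to close the ocean at a constant rate."""
    width_cm = PACIFIC_WIDTH_KM * 1_000 * 100   # km -> m -> cm
    return width_cm / rate_cm_per_year / 1_000_000

for rate in (3.0, 4.0, 5.0):       # plausible cm/yr convergence rates (assumed)
    print(f"{rate} cm/yr -> ~{closure_time_myr(rate):.0f} million years")
# 3-5 cm/yr gives roughly 200-330 million years, consistent with the study's estimate.
```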

Lead author Dr Chuan Huang, from Curtin’s Earth Dynamics Research Group and School of Earth and Planetary Sciences, said the new discoveries were significant and provided information about what would happen to Earth over the next 200 million years.

“Over the past 2 billion years, Earth’s continents have collided to form a supercontinent every 600 million years, known as the supercontinent cycle. This means that the current continents will come together again in a few hundred million years.”

The new supercontinent was previously named Amasia because some believe the Pacific Ocean will close (as opposed to the Atlantic and Indian Oceans) when America collides with Asia. Australia is also expected to play a role in this critical Earth event, first colliding with Asia and then connecting America and Asia once the Pacific Ocean closes.

“By simulating the expected evolution of the Earth’s tectonic plates using a supercomputer, we were able to show that in less than 300 million years it is probably the Pacific Ocean that will close, allowing the formation of Amasia, debunking some previous scientific theories.”

Co-author John Curtin Distinguished Professor Zheng-Xiang Li, also of Curtin’s School of Earth and Planetary Sciences, said that having the entire world dominated by a single landmass would drastically alter Earth’s ecosystem and environment.

“Earth as we know it will look dramatically different when Amasia forms. Sea levels are expected to be lower and the vast interior of the supercontinent will be very arid with high daily temperature ranges,” Professor Li said.

“Currently, the Earth consists of seven continents with very different ecosystems and human cultures, so it would be fascinating to think about what the world might look like in 200 to 300 million years.”

Journal reference:

  1. Chuan Huang et al., “Will Earth’s Next Supercontinent Assemble Through the Closing of the Pacific Ocean?”, National Science Review (2022). DOI: 10.1093/nsr/nwac205
How the US nuclear test moratorium started a supercomputing revolution

Thirty years ago, on September 23, 1992, the United States conducted its 1,054th nuclear weapons test.

When this test, named Divider, was detonated underground in the Nevada desert that morning, no one knew it would be the last American test for at least the next three decades. But by 1992 the Soviet Union had formally dissolved, and the United States government declared what was then seen as a short-term moratorium on testing that continues today.

This moratorium came with an unexpected benefit: the end of nuclear weapons testing ushered in a revolution in high-performance computing, with far-reaching national and global security impacts that few are aware of. Maintaining our nuclear weapons in the absence of testing created an unprecedented demand for scientific computing power.

At Los Alamos National Laboratory in New Mexico, where the first atomic bomb was built, our primary mission is to maintain and verify the safety and reliability of the nuclear stockpile. To do this, we use non-nuclear and subcritical experiments coupled with advanced computer modeling and simulations to assess the health and extend the life of US nuclear weapons.

But as we all know, the geopolitical landscape has changed in recent years, and while nuclear threats still loom, a host of other emerging crises threaten our national security.

Pandemics, sea level rise and coastal erosion, natural disasters, cyberattacks, the spread of disinformation, energy shortages: we have seen firsthand how these events can destabilize nations, regions and the world. At Los Alamos, we now apply high-performance computing, developed over decades to simulate nuclear weapon explosions with extraordinarily high fidelity, to address these threats.

When the Covid pandemic first took hold in 2020, our supercomputers were used to help predict the spread of the disease, as well as model vaccine deployment, the impact of variants and their spread, counties at high risk of vaccine hesitancy and the impacts of various vaccine distribution scenarios. They also helped model the impact of public health orders, such as face mask mandates, to stop or slow the spread.

This same computing power is used to better understand DNA and the human body at fundamental levels. Los Alamos researchers have created the largest simulation to date of an entire DNA gene, a feat that required the modeling of a billion atoms and will help researchers better understand and develop cures for diseases such as cancer.

What are Los Alamos supercomputers used for?

The Laboratory also uses the power of secure and classified supercomputers to examine the national security implications of climate change. For years, our climate models have been used to predict Earth’s responses to change with ever-increasing resolution and accuracy. But the usefulness of our climate models to the national security community has been limited. This is changing, given recent advances in modelling, increasing resolution and computing power, and combining climate models with infrastructure and impact models.

We can now use our computing power to observe climate change at extraordinarily high resolution in areas of interest. Because the work is done on secure computers, we don’t reveal to potential adversaries exactly where (and why) we are looking. Additionally, the use of these supercomputers allows us to incorporate classified data into the models which can further increase accuracy.

Los Alamos supercomputers are also used for earthquake prediction, coastal erosion impact assessment, wildfire modeling, and a host of other national security challenges. We also use supercomputers and data analytics to optimize our nonproliferation threat detection efforts.

Of course, our Laboratory is not alone in this effort. Other Department of Energy labs are using their intensive computing power to tackle similar and additional challenges. Likewise, private companies pushing the boundaries of computing are also helping to advance national security-focused computing efforts, much like the work of our nation’s top universities. As the saying goes, a rising tide lifts all boats.

And we have the moratorium on nuclear weapons testing, at least in part, to thank. We did not know 30 years ago how much we would benefit from the supercomputing revolution that followed. As a nation, continuing to invest in supercomputing not only ensures the safety and effectiveness of our nuclear stockpile, but also advances scientific exploration and discovery that benefits everyone. Our national security depends on it.

Bob Webster is the deputy director for weapons at Los Alamos National Laboratory. Nancy Jo Nicholas is Associate Laboratory Director for Global Security, also at Los Alamos.

Have an opinion?

This article is an Op-Ed and the opinions expressed are those of the author. If you would like to respond or would like to submit your own editorial, please email Cary O’Reilly, C4ISRNET Senior Editor.

DOD’s largest telescope atop Haleakalā in Maui receives mirror coating, preserves space domain awareness

VIDEO | 06:45 | The Advanced Electro-Optical System (AEOS) telescope receives a new coating. (Courtesy photo/Boeing)

By Jeanne Dailey, Air Force Research Laboratory Public Affairs

The 3.6-meter, 75-ton Advanced Electro-Optical System, or AEOS, telescope, pictured with the mirror coating team, is the Department of Defense’s largest optical telescope. The mirror has received its second coating since AEOS was installed at the Air Force Maui Optical and Supercomputing site in 1997. AMOS is part of the Air Force Research Laboratory, and maintaining the mirror in pristine condition is key to the space domain awareness mission of the US Space Force. (Courtesy photo/Boeing)

The Advanced Electro-Optical System, or AEOS, at the Air Force Maui Optical and Supercomputing site, the Department of Defense’s largest telescope at 3.6 meters (about 11.8 feet), has had a facelift.

Located atop the 10,023-foot Haleakalā volcano, the telescope is part of a series of telescopes called the Maui Space Surveillance System, which the US Space Force uses for Space Domain Awareness, or SDA, recognizing space as a priority area for advancing national security.

The site combines a research and development mission under the Air Force Research Laboratory, a laboratory supporting two services, with an operational mission under the U.S. Space Force’s 15th Space Surveillance Squadron, a unit of USSF Space Operations Command activated in May 2022.

After a year of planning and four months of execution, the site has completed the recoating of AEOS’s main mirror. AEOS is a reflector telescope, meaning it has a small secondary mirror placed near the focus of the primary mirror to reflect light back through a central hole, increasing the magnification and sharpness of objects in the sky.

Workers remove and wash the main mirror of the Advanced Electro-Optical System at the Air Force Maui Optical and Supercomputing site, Maui, Hawaii, in preparation for its recoating. AMOS is part of the Air Force Research Laboratory, and the AEOS telescope supports the US Space Force’s space domain awareness mission to preserve the nation’s ability to operate freely in space. (Courtesy photo/Boeing)

Keeping the AEOS telescope’s main mirror in good condition is paramount to the site’s SDA mission, said Lt. Col. Phillip Wagenbach, who is both squadron commander and branch chief of the research and development mission of the AFRL Directed Energy Directorate.

The Advanced Electro-Optical System primary mirror cell, which contains the mirror substrate, moves from its telescope location at the Air Force Maui Optical and Supercomputing (AMOS) site, Maui, Hawaii, to the unit’s mirror coating facility, where it will be recoated. AMOS is part of the Air Force Research Laboratory, and the AEOS telescope supports the US Space Force’s space domain awareness mission to preserve the nation’s ability to operate freely in space. (Courtesy photo/Boeing)

“I am honored to lead these two critical functions that preserve our access and freedom to operate in space,” Wagenbach said. “Periodic coating of the AEOS main mirror ensures the telescope is ready to support the warfighter’s SDA mission. There’s never really a good time to take the telescope offline, but it’s better to plan the recoating as a periodic maintenance effort than to have to shut down the telescope due to catastrophic mission degradation.”

The first recoating took place in late 2008, approximately 12 years after the original coating was applied in 1997. The long gap between the original coating and the first recoating was mainly due to the construction of the mirror coating facility, which was completed in 2008.

“Large mirrors like the 3.6-meter AEOS need to be recoated every 4-6 years, but performance requirements are highly dependent on the telescope’s mission,” said Scott Hunt, site technical director. “For our SDA mission, we are challenged to detect dark objects at night and to image satellites during the day. Typically, our SDA objects are brighter than astronomical objects, so we can push coatings longer than astronomers can.”

Hunt said scientists and engineers track reflectivity degradation and mirror scattering over time. They weigh that against the downtime of a recoating, with the telescope out of service, and against the mission’s performance requirements for daytime imaging and dark-object detection.

THE ARTICLE CONTINUES UNDER THE AD

“The bare aluminum coating of the AEOS primary mirror degrades over time,” Hunt said. “When the coating is first applied, it is about 1,000 angstroms thick, or roughly 1/700th the width of a human hair. Imperfections in the original coating increase scatter, decrease reflectivity and may accelerate degradation. These imperfections include smudges, pinholes and spatter created by dust and contaminants on the mirror substrate, or drips of aluminum at the time of coating.”
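That thickness comparison is easy to sanity-check. A minimal sketch, assuming a typical human hair width of about 70 micrometers (the hair figure is an assumption, not a number from the article):

```python
# Quick unit check on the coating-thickness comparison above.
# Hair width varies from person to person; 70 micrometers is an assumed typical value.

ANGSTROM_M = 1e-10
coating_thickness_m = 1_000 * ANGSTROM_M     # 1,000 angstroms = 100 nanometers
hair_width_m = 70e-6                         # ~70 micrometers (assumption)

fraction = coating_thickness_m / hair_width_m
print(f"coating is ~1/{1 / fraction:.0f} of a hair width")   # ~1/700
```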

Removing the primary mirror cell from the telescope and moving it to the mirror coating facility is a delicate and time-consuming process that ends with a comparatively quick coating of the mirror.

“Once the mirror cell is transferred from the telescope on the fourth floor of the AEOS building to the mirror coating facility on the first floor, it takes approximately two weeks to remove the mirror substrate from the cell, remove the old coating and prepare the coating chamber,” Hunt said. “Once the mirror is in the chamber, the reflective coating is applied by vacuum deposition from aluminum-coated tungsten filaments over a period of 15 to 20 minutes. When the aluminum on the filaments begins to vaporize, the actual coating process takes less than a minute.”

The Boeing team poses with the Advanced Electro-Optical System, or AEOS, primary mirror following a recoating at the Air Force Maui Optical and Supercomputing (AMOS) site, Maui, Hawaii. This was the second coating of the mirror since AEOS was originally installed in 1997. AMOS is part of the Air Force Research Laboratory, and the 3.6-meter AEOS telescope supports the space domain awareness mission of the US Space Force. (Courtesy photo/Boeing)

Boeing personnel on site carried out the recoating with support from government management, the facilities contractor and outside experts. During the process, the team encountered a few challenges and even a surprise.

“Probably the biggest challenge was keeping the mirror coating facility clean and ensuring little or no contamination on the substrate before sealing in the vacuum chamber,” Hunt said. “The stripping and cleaning of the substrate was a critical process, especially the final wipe to remove any residue of chemicals used during the stripping and cleaning process.”


Hunt said that by using a HEPA filter and a cleanroom plastic shroud around the vacuum chamber, the team was able to maintain a much lower particle count inside the chamber’s bell jar.

“We also fabricated a ‘drumhead’ cover that was placed over the mirror substrate immediately after the cleaning process,” Hunt said. “The drumhead lid proved to be an effective innovation in mitigating particle buildup on the substrate while we were making final preparations in the chamber.”

During the process, insects startled the team several times.

“An excited butterfly landing on our clean substrate inside the chamber would have been catastrophic for the coating process,” Hunt explained. “We were able to extract it with the cleanroom vacuum without any adverse effects on the new coating.”

To validate the coating process, the Maui team sent the results to private-industry coating experts in Albuquerque, New Mexico, and Tucson, Arizona.

“The report we got back on this recoating was that ‘the results were excellent and among the best they’ve ever seen on a large mirror of this type,’” Wagenbach said.

New supercomputer for climate research inaugurated in Hamburg

A new supercomputer called Levante has been inaugurated at the German Climate Computing Center. It is said to enable incredibly detailed climate simulations, performing 14 quadrillion mathematical operations per second (14 petaflops).

Levante: a supercomputer goes into service in Hamburg

On September 22, 2022, the new Levante supercomputer began operations at the German Climate Computing Center (DKRZ). It is provided by Atos and is used for climate research. Due to its enormous computing power, it should enable detailed climate simulations that have not been feasible until now.

Levante is made up of 2,832 tightly networked computers, each with two processors, which together have a performance of 14 petaflops and can perform 14 quadrillion mathematical operations per second. An impressive figure, but not quite up to the Mare Nostrum 5 with 314 PFLOPS.

Each AMD EPYC processor has 64 cores, giving the supercomputer a total of more than 362,000 processing cores. The main memory comprises more than 800 terabytes, distributed across nodes with between 256 GB and 1,024 GB each. In addition to the CPU partition with conventional nodes, Levante has a partition with 60 GPU nodes that provides an additional peak computing power of 2.8 petaflops.
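The quoted core count follows directly from the node figures above; a minimal sketch of the arithmetic (the per-core average at the end is a rough derived figure, not an official DKRZ number):

```python
# Sanity check of the Levante node and core counts quoted above.

nodes = 2_832                  # CPU nodes
cpus_per_node = 2              # AMD EPYC processors per node
cores_per_cpu = 64

total_cores = nodes * cpus_per_node * cores_per_cpu
print(total_cores)             # 362,496 -> "more than 362,000 processing cores"

# 14 petaflops spread over those cores corresponds to roughly:
peak_flops = 14e15
print(f"{peak_flops / total_cores / 1e9:.1f} GFLOPS per core on average")
```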

Data transfer between computers is extremely fast thanks to NVIDIA Mellanox HDR 200G. Up to 200 Gbit/s are possible. The data is stored on a storage system with a total capacity of 132 petabytes, which is provided by DDN.

In November 2017, an agreement between the Helmholtz Association, the Max Planck Society and the Free and Hanseatic City of Hamburg approved financial resources for Levante. A total amount of 45 million euros has been agreed.

High resolution climate models

“The new Levante supercomputer at DKRZ will enable even more complete, higher resolution and therefore better climate projections in the future,” says Federal Research Minister Bettina Stark-Watzinger of the FDP.

These projections are expected to provide even more complete and detailed information on the effects of climate change than was possible before. The Levante supercomputer also opens up new possibilities in terms of energy efficiency.

According to Professor Dr. Thomas Ludwig, Managing Director of DKRZ, the waste heat will be used to heat the laboratories in the nearby university building. The supercomputer also enables particularly fine climate simulations, with a mesh width of only 1 km.

“Even if only for a few hours, this has never been possible for anyone before. It has only become possible because we now have Levante,” adds Professor Jochem Marotzke, director of the Max Planck Institute for Meteorology.

Quantum computing is a revolution – GuruFocus.com

Quantum Computing Inc. (QUBT, Financial) stands out as an excellent potential investment in the disruptive but nascent quantum computing industry.

Solve complex problems

According to TechTarget, quantum computing theory explains the nature and behavior of energy and matter at the atomic and subatomic levels. Current computing is based on algebraic principles; quantum computers compute with qubits, resulting in a massive increase in storage and speed. They solve problems that conventional computers struggle with by running simulations of complex systems.
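One way to see why qubits change the picture: describing a general n-qubit state classically requires 2^n complex amplitudes. The sketch below illustrates that scaling only; it is not a description of any particular quantum machine.

```python
# Illustration of why simulating n qubits classically gets expensive:
# a general n-qubit state needs 2**n complex amplitudes.

def classical_memory_gib(n_qubits: int, bytes_per_amplitude: int = 16) -> float:
    """Memory needed to store a full state vector of complex128 amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude / 2**30

for n in (10, 30, 50):
    print(f"{n} qubits -> {classical_memory_gib(n):,.1f} GiB")
# 10 qubits fit in kilobytes; 50 qubits already need ~16.8 million GiB,
# which is why quantum hardware can explore state spaces classical machines cannot store.
```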

FutureLearn.com notes that, in these early days, quantum computers are difficult and expensive to build. The machines are error-prone, and there are hardware development and stability issues. They require massive investments in cooling and basic infrastructure. Moreover, specialists are needed to design, build and operate them.

Needless to say, these companies need a lot of capital to stay in business. In recent years, the pace of investment has accelerated among large companies.

An example is Microsoft Corp. (MSFT, Financial), whose Azure Quantum service builds and runs quantum algorithms for multiple platforms at once. Alphabet Inc.’s (GOOG, Financial)(GOOGL, Financial) Google Quantum AI develops software and hardware. Google researchers claimed three years ago to have achieved quantum supremacy when their Sycamore quantum processor completed, in 200 seconds, an abstruse calculation that they claimed would tie up a classical supercomputer for 10,000 years. In addition, International Business Machines Corp. (IBM, Financial) released its Q System One two years ago, while Nvidia Corp. (NVDA, Financial) is partnering with other companies on the next generation of computers. In August 2020, Amazon.com Inc. (AMZN, Financial) launched Amazon Braket, a cloud-based quantum computing service. By 2023, Fujitsu Ltd. (TSE:6702, Financial) will be the first company to provide research derived from quantum computers.

According to Matthew Humphries of PC Magazine, “the hope is that quantum computing can have a major positive impact in a range of fields, including chemicals, pharmaceuticals, automotive and finance.”

New to the game

Founded in 2001, Leesburg, Virginia-based Quantum Computing is a newer player in the space; it was originally created to sell inkjet cartridges online. According to The Quantum Insider, it then became a beverage distributor in 2007.

In 2018, the company announced its decision to focus on quantum computing and revealed its new name.

Quantum Computing is now in the early stages of commercializing software tools and applications that harness new quantum computing capabilities for businesses, rather than focusing on hardware. Its Qatalyst software is a quantum accelerator that allows developers to build and run quantum-ready applications on conventional computers and, eventually, on quantum computers.

The company made its first acquisition in June. The addition of QPhoton expands its ability to develop photonic-based quantum systems. Knowledge gained from QPhoton complements Quantum’s entropy quantum computer, Dirac. The main advantage is that Dirac deploys at room temperature as a rack-mountable server without special infrastructure, so no cryogenic cooling environment is required.

In an August letter to shareholders, Quantum Computing CEO Robert Liscouski said he believed the company offered a real-world problem-solving solution that gave it a three- to five-year head start on the competition.

“Quantum information processing to solve real-world problems is now available and this is just the beginning,” he said. “World-class talent will be at the center of QCI’s transformation into a true force in the quantum information industry.”

Initial numbers

At the end of 2021, Quantum Computing reported total assets of $17.3 million. Six months later, the company reported total assets of $91.9 million, but that included fixed assets net of amortization, security deposits, intangible assets of $25 million and goodwill of nearly $60 million.

The market cap of the company is $84.76 million.

Total revenue for the six months ended June 30 was $96,000. Total operating expenses amounted to $11.5 million, leaving a cumulative net loss of $12.23 million. The loss per share was 42 cents. The company has no debt.

Based on the GF score of 40 out of 100, the company currently has low performance potential.

It received a high financial strength rating of 8 out of 10 and a moderate strength rating of 6 out of 10. The financial strength rating is arguably a little optimistic, since the company has a cash runway of less than one year and could dilute shareholders by issuing more shares to raise funds.

Quantum’s stock is more volatile than the market: its 52-week high of $8.90 was near the initial public offering price, and the stock fell to a low of $1.42 in May in anticipation of the acquisition.


On September 20, Quantum Computing announced the launch of the Dirac1 subscription service, initiating the monetization of its research and development. The commercially available service provides web-based access to systems capable of solving problems with up to 5,000 variables, on plans ranging from one hour per month to a fully dedicated system.

Institutional ownership first appeared in October 2021. Currently, Quantum Computing has 33.9 million shares outstanding. About 5.4%, or 1.83 million shares, are held by institutions; nearly 14% is held by insiders; and 74.33%, or approximately 25 million shares, are free float.

Positive calculation

The large percentage of insider ownership is a positive sign, and the increase in institutional ownership suggests growing confidence in the small company. At the end of June, Quantum Computing joined the Russell Microcap Index; Liscouski called the event a significant milestone. The stock price has fallen more than 60% in the past year, but short interest is less than 2%.

The quantum computing industry is expected to expand at a compound annual growth rate of 35.2% through 2028, which should drive growth in Quantum Computing’s revenue. The company could also benefit from government cash injections.
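For context on what a 35.2% compound annual growth rate implies, here is a minimal compounding sketch; the base-year market size is treated as an arbitrary unit, since the article does not give one.

```python
# Compound growth implied by a 35.2% CAGR. The 2022 base market size is
# treated as 1 unit because the article does not state it.

def project(base: float, cagr: float, years: int) -> float:
    return base * (1 + cagr) ** years

for year in range(2023, 2029):
    print(year, round(project(1.0, 0.352, year - 2022), 2))
# A 35.2% CAGR multiplies the market by roughly 6x between 2022 and 2028.
```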

Under the CHIPS and Science Act, the U.S. Department of Energy has allocated $166 million over five years to help build quantum computing capacity and $500 million over the same period to build large-scale quantum computing network infrastructure nationwide. The US National Quantum Initiative has injected more than $1 billion in other federal funds.

I predict that companies will increasingly turn to Quantum Computing, as it offers software that can run on currently available hardware; that return on equity will double over the next 12 months; and that, with proper marketing, the stock has the potential to exceed $3.25 per share.

Conclusion

Quantum computing may seem like a niche industry in the early days of research and development, but it is attracting the attention of big corporations, investors, and government officials. History suggests that companies with deep pockets are looking to buy early assets rather than fully commit to building them. The massive influx of money into the rapidly changing industry makes Quantum Computing’s position in the market a great long-term investment.

UH President Named to NSF Advisory Board for Cyberinfrastructure

University of Hawaii President David Lassner has been appointed to the National Science Foundation (NSF) Advisory Committee for Cyberinfrastructure.

David Lassner

The NSF Advisory Committee for Cyberinfrastructure (ACCI) advises NSF on the agency’s plans and strategies for developing and supporting state-of-the-art cyberinfrastructure that enables advances in all areas of science and engineering. Lassner joins 16 other committee members from higher education institutions and federal research labs.

The ACCI is advisory to all of NSF and works closely with its Office of Advanced Cyberinfrastructure (OAC). Within NSF, OAC supports cyberinfrastructure resources, tools and services such as supercomputers, high-capacity mass storage systems, system software suites, scalable interactive programming environments and visualization tools, all linked by university, regional, national and international broadband networks. OAC also supports the preparation and training of current and future generations of researchers and educators to use cyberinfrastructure to pursue their research and education goals.

Earlier this year, Lassner was also invited to join the board of directors of Internet2, through which the American higher education community, research institutes, government entities, businesses and cultural organizations work together to provide a secure broadband network, cloud solutions, research support and services tailored to research and education. The Internet2 board is structured with a mix of university presidents, chief information officers, network operators and others. In 2010, Lassner received the Richard Rose Award from Internet2, which recognizes extraordinary individual contributions to extending the reach of research universities’ advanced networks to the broader educational community.

“I am honored to be a part of these two organizations, both of which advance research and education through the application of modern computing, information and networking technologies,” said Lassner. “It’s too easy for Hawaii to be overlooked in national agendas given our remoteness and separation from the continental United States. UH can make substantial contributions through the work of our own cyberinfrastructure researchers and innovators, and we have a particularly important role in connecting U.S. operations to the Pacific and Asia.”

Scientists are accelerating the search for a key ingredient in next-generation lithium batteries

“The team can identify specific features or molecules that might work better and recommend them to experimental groups to test in the lab,” Osborne said.

“We are able to calculate how changing the chemical structure of electrolyte molecules in a battery can ultimately increase the effective capacity of lithium batteries.”

Osborne said the team can provide key insights to reduce research time and costs, beneficial for advancing battery technologies in shorter timeframes.

“The computational approach we developed greatly speeds up the screening process, which would traditionally be cost and time prohibitive if each candidate molecule had to be synthesized experimentally and tested in the laboratory,” he said.
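As a generic illustration of what such a computational screening loop looks like (the candidate names, the screening cutoff and the stability values below are hypothetical placeholders, not results from the RMIT team):

```python
# Generic illustration of in silico screening: rank candidate electrolyte salts
# by a predicted property and shortlist the best ones for lab testing.
# All names and numbers here are hypothetical placeholders.

candidates = {
    "salt_A": 4.1,   # hypothetical predicted electrochemical stability window (V)
    "salt_B": 5.3,
    "salt_C": 4.8,
}

THRESHOLD_V = 4.5    # assumed screening cutoff

shortlist = sorted(
    (name for name, window in candidates.items() if window >= THRESHOLD_V),
    key=lambda name: -candidates[name],
)
print("Recommend for lab testing:", shortlist)   # ['salt_B', 'salt_C']
```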

Next steps

The team is studying other modifications of electrolyte molecules that could expand their electrochemical stability, pushing the limits of battery storage capacity.

“We are also investigating modifications to lithium-air battery chemistry, which are still in the development phase, but could be even lighter and suitable for advanced applications such as electric flight,” Spencer said.

The researchers use supercomputing facilities at the National Computational Infrastructure (NCI) Facility in Canberra and the Pawsey Supercomputing Center in Western Australia.

“We also use RMIT’s RACE Hub to analyze our data and produce high-resolution animations that help us interpret our data and communicate our research findings,” Spencer said.

RMIT is collaborating with Amazon Web Services (AWS) and AARNet to be the first Australian university to implement a dedicated commercial cloud supercomputing facility. Known as the RACE Hub, it enables true scalable and elastic high-power computing to support digital innovation.

“Towards greater electrochemical stability of electrolytes: lithium salt design by in silico screening” is published in the Journal of Materials Chemistry A (DOI: 10.1039/D2TA01259F).

Prevent today’s encrypted data from becoming tomorrow’s treasure

You may think that encrypting data with today’s technology will provide robust protection. Even in the event of a data breach, you can assume that the information is secure. But if your organization works with “long-tail” data — meaning its value lasts for years — you’re wrong.

Fast forward five to 10 years from now. Quantum computers – which use quantum mechanics to perform operations millions of times faster than today’s supercomputers – will arrive and will be able to crack current encryption in minutes. At that point, nation-state actors only need to load the encrypted data they have been collecting for years into a quantum computer, and within minutes they will be able to read any part of the stolen data in the clear. This type of “harvest now, decrypt later” (HNDL) attack is one of the reasons adversaries are now targeting encrypted data. They know they can’t decipher the data today, but they can tomorrow.

Even though the threat of quantum computing is a few years away, the risk exists today. It is for this reason that US President Joe Biden signed a national security memorandum requiring that federal agencies, defense, critical infrastructure, financial systems and supply chains develop plans to adopt quantum-resilient encryption. President Biden setting the tone for federal agencies is an apt metaphor: quantum risk needs to be discussed, and risk mitigation plans made, at the executive level (CEO and board).

Take a long-term view

Data from research analysts suggests that the typical CISO spends two to three years at a company. This leads to potential misalignment with a risk likely to materialize in five to 10 years. And yet, as we see with government agencies and a host of other organizations, the data you generate today can provide adversaries with tremendous value in the future once they can access it. This existential problem will probably not be solved by the security lead alone; because of its critical nature, it needs to be addressed at the highest level of business leadership.

For this reason, savvy CISOs, CEOs, and boards should address quantum computing risks together, now. Once the decision to adopt quantum-resistant encryption is made, the questions invariably become, “Where to start and how much will it cost?”

The good news is that it doesn’t have to be a painful or expensive process. In fact, existing quantum-resilient encryption solutions can run on existing cybersecurity infrastructure. But this is a transformational journey – the learning curve, internal strategy and project planning decisions, technology validation, and planning and implementation all take time – so it is imperative that business leaders start preparing today.

Focus on randomization and key management

The road to quantum resilience requires the engagement of key stakeholders, but it is practical and generally does not require ripping out and replacing existing cryptographic infrastructure. One of the first steps is to understand where all your critical data resides, who has access to it, and what safeguards are currently in place. Next, it is important to identify which data is the most sensitive and how long its value lasts. Once you have these data points, you can develop a plan to prioritize migrating data sets to quantum-resilient encryption.

Organizations need to consider two key points when evaluating quantum-resilient encryption: the quality of the random numbers used to encrypt and decrypt the data, and key distribution. One of the vectors quantum computers could use to crack current encryption standards is to exploit encryption/decryption keys derived from numbers that are not truly random. Quantum-strong cryptography uses encryption keys that are longer and, most importantly, based on truly random numbers so they cannot be hacked.
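A minimal sketch of the randomness point: keys should come from a cryptographically secure source rather than a seedable, predictable generator. This illustrates the principle only; it is not a post-quantum algorithm in itself.

```python
# Minimal sketch: derive a symmetric key from the operating system's
# cryptographically secure randomness source rather than a predictable PRNG.
import secrets

def new_key(length_bytes: int = 32) -> bytes:
    """Return a 256-bit key drawn from the OS CSPRNG."""
    return secrets.token_bytes(length_bytes)

# Anti-pattern for contrast: random.Random() is seedable and reproducible,
# so keys derived from it could be reconstructed by an attacker.
key = new_key()
print(len(key) * 8, "bit key")
```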

Second, the typical enterprise has multiple encryption technologies and key distribution products, and managing them is complex. To reduce key dependency, often only large files are encrypted or, even worse, lost keys leave lots of data inaccessible. It is imperative that organizations deploy enterprise-wide, high-availability encryption key management that enables encryption of a virtually unlimited number of smaller files and records. The result is a significantly more secure business.
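One common pattern behind that kind of per-record encryption is envelope encryption: each record gets its own data key, and only the wrapped data keys depend on a centrally managed master key. The sketch below is a generic illustration using the open-source `cryptography` package, not a description of any vendor's product.

```python
# Illustrative envelope-encryption sketch (not any specific vendor product).
# Requires the 'cryptography' package: pip install cryptography
from cryptography.fernet import Fernet

master = Fernet(Fernet.generate_key())       # stands in for a KMS-managed master key

def encrypt_record(plaintext: bytes) -> tuple[bytes, bytes]:
    data_key = Fernet.generate_key()         # unique data key per record
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = master.encrypt(data_key)   # only wrapped keys need central management
    return wrapped_key, ciphertext

def decrypt_record(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    data_key = master.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

wrapped, blob = encrypt_record(b"customer record 42")
assert decrypt_record(wrapped, blob) == b"customer record 42"
```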

Quantum-resistant encryption is no longer a “nice to have”. With each passing day, the risk increases as the encrypted data is stolen for future cracking. Fortunately, unlike quantum computing, it does not require huge investments and the resulting risk reduction is almost immediate. Getting started is the hardest part.

Revealed: FiveThirtyEight ‘supercomputer’ predictions for Carlisle United this season

Carlisle United will have to defy the stats pundits if they are to have a memorable season in League Two.

The Blues are expected to finish in a modest 19th position, according to the statistical experts.

That is the finishing position predicted by the calculations of the “supercomputer” of statistical analysts FiveThirtyEight.

They released their latest updated predictions for the fourth tier after the last round of matches.

And while Paul Simpson’s Blues currently sit in 12th place after a strong start, data modelers predict a fall in the division.


They have Carlisle finishing with 54 points – just one more than last season – and just one place higher in the table than in 2021/22.

They should finish with a goal difference of -13.

United’s predictions come despite Simpson leading the Blues to two wins, three draws and one defeat in their first six matches.

If the Blues had beaten Grimsby Town in the game which ended up being abandoned, they would currently be seventh or eighth.

FiveThirtyEight have Salford City top of the table as champions, ahead of Leyton Orient and Mansfield Town.

The play-off spots are taken by Doncaster Rovers, Northampton Town, Stevenage and Swindon Town.

The bottom two spots, meanwhile, see Rochdale and Hartlepool United tipped for relegation.

Carlisle has a less than one percent chance of winning the title and a three percent chance of automatic promotion.

Their play-off chances are rated at 7%, while the risk of relegation is 14%.

FiveThirtyEight says its predictions are based on a revised version of ESPN’s Soccer Power Index, a scoring system devised by Nate Silver in 2009.

Ratings are said to be the best estimate of a team’s overall strength, including offensive and defensive ratings, which reflect a team’s abilities against an average opponent on neutral ground.

United’s fellow Cumbrians Barrow are expected to finish 11th by the data experts.

This is despite an impressive start from Pete Wild’s side, who sit third after five wins and two losses in their first seven League Two outings this campaign.

Carlisle currently fluctuates between 40/1 and 50/1 with the bookmakers for the title, 10/1 for promotion, 15/2 to make the play-offs and 8/1 for relegation.
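For comparison with FiveThirtyEight’s percentages, the fractional bookmaker odds above can be converted into implied probabilities; the sketch below does the arithmetic and makes no adjustment for the bookmaker’s margin.

```python
# Converting the fractional odds quoted above into implied probabilities,
# for comparison with FiveThirtyEight's model. No bookmaker margin is removed.

def implied_probability(numerator: int, denominator: int = 1) -> float:
    """Fractional odds a/b imply a probability of b / (a + b)."""
    return denominator / (numerator + denominator)

markets = {
    "title (40/1)": (40, 1),
    "title (50/1)": (50, 1),
    "promotion (10/1)": (10, 1),
    "play-offs (15/2)": (15, 2),
    "relegation (8/1)": (8, 1),
}
for name, (a, b) in markets.items():
    print(f"{name}: {implied_probability(a, b):.1%}")
# e.g. 15/2 implies ~11.8%, versus the model's 7% play-off chance.
```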

The Blues are due to return to action at Mansfield Town on Tuesday night, after all matches on Saturday were postponed following the death of Queen Elizabeth II.


Is overeating to blame for the bulges in the Milky Way bar? – ScienceDaily

A new simulation run on the world’s most powerful supercomputer dedicated to astronomy has produced a testable scenario to explain the appearance of the bar of the Milky Way. Comparing this scenario with data from current and future space telescopes will help clarify the evolution of our home galaxy.

Astronomy is revealing the structure of the Milky Way galaxy we live in in ever-increasing detail. We know it to be a disk galaxy with two- or four-armed spirals and a straight bar in the middle connecting them. We now also know that the inner part of the bar has a “peanut-shaped bulge,” where the bar is thicker and protrudes above and below the midplane of the Milky Way, and a “nuclear bulge,” which is disk-like and located in the central part of the Milky Way. Some other galaxies, but not all, show similar bulges of both types.

Like dieters who suddenly discover bulges, astronomers have asked, “How did these two types of bulge form?” To answer this question, a team led by Junichi Baba at the National Astronomical Observatory of Japan (NAOJ) simulated a possible scenario for a Milky Way-like galaxy on NAOJ’s “ATERUI II,” the world’s most powerful supercomputer dedicated to astronomy. The team’s simulation is the most comprehensive and accurate to date, including not only the stars in the galaxy but the gas as well. It also incorporates the birth of new stars from gas and the death of stars as supernovae.

The formation of a bar helps channel gas to the central part of the galaxy, where it triggers the formation of new stars. So it might be reasonable to assume that the nuclear bulge in the galaxy is created from new stars born there. But the simulations show that there are almost no new stars in the bar outside of the nuclear bulge, because the bar is so good at funneling gas to the center. This means that gorging on gas is not the reason a peanut-shaped bulge develops in the bar. Instead, the team finds that gravitational interactions can pull some of the stars into orbits that take them above and below the midplane.

The most exciting part is that the simulation provides a testable scenario. Because the peanut-shaped bulge acquires no new stars, all of its stars must predate the formation of the bar. At the same time, the bar channels gas to the central region where it creates many new stars. Thus, almost all of the stars in the nuclear bulge will be born after the bar has formed. This means that stars in the peanut-shaped bulge will be older than stars in the nuclear bulge, with a clear break between ages. This break corresponds to the moment when the bar was formed.

Data from the European Space Agency’s Gaia probe and the future Japanese JASMINE satellite will help determine the movements and age of stars and test this scenario. If astronomers can detect a difference between the ages of stars in the peanut-shaped and nuclear bulges, it will not only prove that overeating is not to blame for the peanut-shaped bulge, it will tell us the age of the bar in the Milky Way Galaxy.

Video: https://youtu.be/Shucn3HIlow

Source of the story:

Material provided by National Institutes of Natural Sciences. Note: Content may be edited for style and length.
