Intel is committed to accelerating AI

At Computex 2024, Intel presented cutting-edge technologies and platforms to accelerate AI. These technologies are poised to dramatically accelerate the AI ecosystem, from the data center, cloud and network to the edge and the PC.

With more processing power, cutting-edge energy efficiency and low total cost of ownership (TCO), customers can now take advantage of complete, significantly faster AI systems.

“Artificial intelligence is driving one of the most momentous eras of innovation the industry has ever seen,” said Intel CEO Pat Gelsinger. “The magic of silicon is once again enabling exponential advances in computing that will push the limits of human potential and drive the global economy for years to come.”

“Intel is one of the only companies in the world innovating across the full spectrum of the AI market opportunity, from semiconductor manufacturing to PC systems, networking, edge and data centers. Our latest Xeon, Gaudi and Core Ultra platforms, combined with the power of our hardware and software ecosystem, are delivering the flexible, secure, sustainable and cost-effective solutions our customers need to maximize the immense opportunities ahead of them,” Gelsinger added.

Accelerating AI: Intel makes AI possible everywhere

During his Computex keynote, Gelsinger highlighted the advantages of open standards and the powerful ecosystem helping to accelerate the AI opportunity.

He was joined by leading industry figures expressing their support, including Jason Chen, President and CEO of Acer, Jonney Shih, President of ASUS, Satya Nadella, CEO of Microsoft, Jack Tsai, President of Inventec, and other ecosystem leaders.

Gelsinger and others made it clear that Intel is revolutionizing AI innovation and delivering next-generation technologies ahead of schedule.

In just six months, the company has gone from launching 5th Gen Intel Xeon processors to introducing the first member of the Intel Xeon 6 family; from first previewing Intel Gaudi AI accelerators to offering enterprise customers a cost-effective, high-performance generative AI (GenAI) training and inference system; and from ushering in the era of the AI PC with Intel Core Ultra processors in more than 8 million devices to introducing its next client architecture, scheduled for release later this year. With these advances, Intel is accelerating execution while pushing the boundaries of innovation and production speed to democratize AI and catalyze entire industries.

Data Center Modernization for AI: Intel Xeon 6 Improves Performance and Energy Efficiency

As digital transformations accelerate, enterprises face increasing pressures to renew aging data center systems to save costs, achieve sustainability goals, maximize physical floor and rack space, and create entirely new digital capabilities across the enterprise.

The Intel Xeon 6 family includes both Efficient-core (E-core) and Performance-core (P-core) processors, designed to address workloads ranging from high-performance computing to scalable cloud-native applications. Both the E-cores and P-cores are built on a common architecture supported by a shared software platform and an open ecosystem of hardware and software vendors.

The first of the Xeon 6 processors to debut is the Intel Xeon 6 E-core (codenamed Sierra Forest), which is available today. The Xeon 6 P-core processors (codenamed Granite Rapids) are expected to launch next quarter.

With high core density and exceptional performance per watt, Intel Xeon 6 E-core delivers efficient computing with significantly lower power costs. Improved performance with greater power efficiency is ideal for the most demanding high-density, scale-out workloads, including cloud-native applications, content delivery networks, network microservices, and consumer digital services.

Additionally, Xeon 6 E-core processors deliver gains of up to 2.6x compared to 2nd Gen Intel® Xeon® processors in media transcoding workloads. By using less power and rack space, Xeon 6 processors free up compute capacity and infrastructure for innovative new AI projects.

High-performance GenAI at significantly lower total cost with Intel Gaudi AI accelerators

Today, harnessing the power of generative AI is faster and less expensive. As the dominant infrastructure option, x86 runs at scale in nearly all data center environments, serving as the foundation for integrating the power of AI while ensuring cost-effective interoperability and the enormous benefits of an open ecosystem of developers and customers, all of which helps accelerate AI with Intel.

Intel Xeon processors are the ideal CPU head node for AI workloads and work in a single system with Intel Gaudi AI accelerators, designed specifically for AI workloads. Together, they offer a powerful solution that integrates seamlessly into existing infrastructure.

As an alternative to the Nvidia H100 for large language model (LLM) training and inference, the Gaudi architecture gives customers the GenAI performance they are looking for, with a price-performance advantage that provides choice, fast time to deployment and a lower total cost of ownership.

A standard AI kit including eight Intel Gaudi 2 accelerators with a universal baseboard (UBB), offered to system vendors at $65,000, is estimated to cost one-third as much as comparable competing platforms. A kit including eight Intel Gaudi 3 accelerators with a UBB will list at $125,000, estimated to be two-thirds the cost of comparable competing platforms.
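For a rough sense of what those fractions imply, the back-of-the-envelope sketch below works out the implied price of the comparable competing platforms; only the kit prices and the one-third and two-thirds ratios come from the announcement, while the derived competitor figures are estimates, not quoted prices.

```python
# Back-of-the-envelope check of the pricing ratios stated above.
# The kit prices and the 1/3 and 2/3 ratios are from the article;
# the "implied competitor" figures are derived, not quoted prices.

gaudi2_kit_usd = 65_000    # 8x Intel Gaudi 2 + universal baseboard (UBB)
gaudi3_kit_usd = 125_000   # 8x Intel Gaudi 3 + universal baseboard (UBB)

implied_competitor_vs_gaudi2 = gaudi2_kit_usd / (1 / 3)   # ~ $195,000
implied_competitor_vs_gaudi3 = gaudi3_kit_usd / (2 / 3)   # ~ $187,500

print(f"Implied comparable platform price (vs. Gaudi 2 kit): ${implied_competitor_vs_gaudi2:,.0f}")
print(f"Implied comparable platform price (vs. Gaudi 3 kit): ${implied_competitor_vs_gaudi3:,.0f}")
```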

Intel Gaudi 3 accelerators will provide significant performance improvements for training and inference on leading GenAI models, helping businesses unlock the value of their proprietary data. On a cluster of 8,192 accelerators, Intel Gaudi 3 is expected to deliver up to 40% faster training time compared with an Nvidia H100 GPU cluster of equivalent size, and up to 15% faster training performance for a cluster of 64 accelerators versus Nvidia H100 on the Llama2-70B model. Additionally, Intel Gaudi 3 is expected to deliver up to two times faster average inference than Nvidia H100 when running popular LLMs such as Llama-70B and Mistral-7B.

To make these AI systems broadly available, Intel is collaborating with at least 10 of the world’s leading system vendors, including six new providers that have announced they will bring Intel Gaudi 3 to market. Today’s new collaborators are Asus, Foxconn, Gigabyte, Inventec, Quanta and Wistron, expanding the production offerings of leading system providers Dell, Hewlett Packard Enterprise, Lenovo and Supermicro.

Accelerating AI on-device for laptop PCs: the new architecture delivers three times the AI compute with exceptional energy efficiency

Beyond the data center, Intel is expanding its AI footprint at the edge and on the PC. With more than 90,000 Edge deployments and 200 million CPUs delivered to the ecosystem, Intel has given businesses choice for decades.

Today, the AI PC category is transforming the computing experience, and Intel is at the forefront of this category-creating moment. It’s no longer just about faster processing speeds or sleeker designs, but about creating edge devices that learn and evolve in real time, anticipating user needs, adapting to their preferences, and heralding a whole new era of productivity, efficiency and creativity.

With AI PCs predicted to account for 80% of the PC market by 2028, Intel has moved quickly to create the best hardware and software platform for the AI PC, with more than 100 independent software vendors (ISVs), 300 features and support for 500 AI models across its Core Ultra platform.

Quickly building on these unprecedented advantages, the company today unveiled the architectural details of Lunar Lake, the flagship processor for the next generation of AI PCs. With a major leap in graphics and AI processing power, and a focus on low-power compute performance for the thin-and-light segment, Lunar Lake will deliver up to 40% lower SoC power and more than three times the AI processing. It is expected to go on sale in the third quarter of 2024, in time for the holiday shopping season.

While others prepare to enter the AI PC market, Intel is already shipping at scale, supplying more AI PC processors through the first quarter of 2024 than all of its competitors combined. Lunar Lake will power more than 80 different AI PC designs from 20 original equipment manufacturers (OEMs). Intel expects to deploy more than 40 million Core Ultra processors to the market this year.
