Intel this week at the Computex conference disclosed more details of the technical architecture of its forthcoming family of Lunar Lake processors, designed specifically for artificial intelligence (AI) workloads.

Lunar Lake leverages a neural processing unit (NPU) capable of 48 tera operations per second (TOPS); PCs based on the chips will start to ship this fall. Lunar Lake also adds an Xe2 graphics processing unit (GPU) that is 80% faster than its previous GPUs, along with an AI accelerator that promises an additional 67 TOPS of performance. Lunar Lake chips will also feature 16GB or 32GB of on-board memory.

Despite all that additional processing horsepower, Intel says its system-on-chip (SoC) architecture for client devices will be 40% more power efficient than previous generations of its processors.

In fact, Intel CEO Pat Gelsinger told conference attendees that Lunar Lake is based on a revolutionary design that shows, contrary to claims made by rivals, that Moore’s Law is alive and well. “Every device will become an AI device,” he said.

Morgan Stanley expects AI PCs will account for 64% of the overall PC market by 2028. At the same time, the number of processor alternatives for building this new class of PCs continues to expand. As such, the level of competition Intel faces in the PC market has never been stiffer, at a time when the Windows and macOS operating systems can run on multiple classes of processors.

Arm-based systems from Qualcomm along with rival offerings from AMD will continue to put pressure on Intel, notes Daniel Newman, CEO of the Futurum Group. “I tend to believe Intel has a good understanding of the challenge,” he says. “The various offerings from Intel and others will be the ultimate arbiter of how much Copilot+PC and AI PC result in a meaningful market shift.”

Intel is also betting on its next-generation Xeon 6 processors to gain share in the data center. Intel expects Xeon 6 processors to drive a three-for-one consolidation of servers by reducing overall latency and power consumption by 40%, using a mix of revamped performance and efficiency cores (P-cores and E-cores). The first of the Xeon 6 processors to become available is the Intel Xeon 6 E-core, code-named Sierra Forest. The Xeon 6 P-core processors, code-named Granite Rapids, are expected to launch next quarter.

Those systems will play a crucial role in exposing 80% of the data that now resides in on-premises IT environments to large language models (LLMs) that have been trained using Gaudi 2 and Gaudi 3 AI accelerators, noted Gelsinger.

In general, Intel next year will reclaim the performance mantle it lost to rivals over the last few years of processor innovation, Gelsinger told conference attendees.

It’s not clear which processor architectures will ultimately prevail as the next generation of AI applications is developed, but as workloads become more data-intensive, the need for more efficient processors is rapidly becoming acute. The more costly it is to process the data required to train and deploy AI models, the longer it will take for AI applications to be pervasively deployed.