Qualcomm Entering AI and Data Centers

We have new developments in the battle for dominance in artificial intelligence!
This time, a new player is entering the game — and not just entering, but entering strongly.
And no, it’s not Nvidia or AMD …
It’s **Qualcomm**!

A company known until now mostly for its smartphone chips is making a big move toward data centers and the cutting edge of today’s technology: artificial intelligence.
And the truth is, this move is neither hasty nor weak. It’s a well-designed, multi-layered strategy with a clear goal.


WHAT QUALCOMM ANNOUNCED

Last Monday, Qualcomm officially announced its entry into the AI accelerator space, unveiling the AI200 and AI250 chips, along with complete rack-scale data-center systems, fully liquid-cooled and designed to operate at hyperscaler level.

The AI200, expected in 2026, supports 768GB of LPDDR memory per card — more than what comparable products from Nvidia or AMD currently offer.
The AI250, expected in 2027, introduces an entirely new “near-memory computing” architecture, offering 10× higher memory bandwidth, lower energy consumption, and, according to Qualcomm, a significantly lower total cost of ownership (TCO).

All this might sound like technical jargon, but it’s big news.

And Qualcomm isn’t stopping there. It has already announced plans to release new AI chips every year starting in 2026, following a strategy similar to those of Nvidia and AMD, with a steady annual product-update cycle.

In addition, the company announced its first major partnership with the Saudi Arabian AI firm Humain, which will use Qualcomm’s chips to power 200-megawatt AI data centers — making Humain one of Qualcomm’s first large-scale AI customers.


QUALCOMM’S GOAL

Until now, Qualcomm has been known mainly for its mobile chips — about 75% of its semiconductor revenue came from smartphones.
But that market is now saturated, and Qualcomm is looking for new sources of growth.
At the same time, it faces rising risks from companies like Apple, which are developing in-house chips.

With the AI200 and AI250, Qualcomm ($QCOM) is clearly targeting the AI data-center market, focusing specifically on inference — the stage where AI models actually run — rather than training.
That’s a big deal, because inference represents the largest portion of real-world AI usage.

It’s a segment with massive potential, as AI adoption expands everywhere — from cloud services and automotive, to smart factories and consumer devices.
Qualcomm claims its chips offer superior energy efficiency, greater integration flexibility for custom infrastructures, and significantly lower operating costs for clients.


WHAT ANALYSTS ARE SAYING

Bank of America views Qualcomm’s ($QCOM) announcement positively, calling it a “meaningful diversification” at a time when smartphone growth is slowing.
It forecasts that the AI-accelerator market will reach $114 billion by 2030, and notes that companies like OpenAI are already seeking alternative suppliers beyond Nvidia.

However, it also points out that deployments in 2026 will be limited in scope, with only one confirmed partnership so far (Humain), and that Qualcomm still needs to prove it can compete technically with the current giants.

The bank also highlights that Qualcomm’s stock remains undervalued, largely due to its dependence on low-growth markets and the risks tied to major clients like Samsung and Apple.
Even so, Bank of America maintains a “Buy” rating with a price target of $200, seeing significant upside potential in the coming months.

Posted Using INLEO
