These are the 10 hottest AI hardware companies to follow in 2025
Watch out, Nvidia: these startups are looking to dent your dominance
Nvidia has become the undisputed leader in AI hardware thanks to its powerful GPUs. Its market value soared in 2024, briefly making it the world's most valuable company, ahead of even Apple and Microsoft, and as of December 2024 its market capitalization stands at approximately $3.3 trillion.
But despite this apparent dominance, Nvidia faces strong competition in AI. AMD, its main rival, continues to challenge it, while Intel and Broadcom are also investing heavily to capture a larger share of this rapidly growing industry.
In addition to these long-established players, several startups are making significant strides in AI hardware, including the likes of Tenstorrent, Groq, Celestial AI, Enfabrica, SambaNova, and Hailo. So we've rounded up the top 10 startups we believe can not only challenge the dominance of established companies but are also pushing the limits of what AI hardware can achieve, positioning them as key innovators to watch in 2025.
Tenstorrent
Tenstorrent, led by CEO Jim Keller - best known for his work on AMD’s Zen architecture and Tesla’s original self-driving chip - focuses on developing AI processors designed to boost performance and efficiency in training and inference workloads.
By combining its Tensix cores with open-source software stacks, Tenstorrent offers an alternative to the costly HBM favored by competitors like Nvidia. The company also licenses AI and RISC-V intellectual property for customers seeking customizable silicon solutions. We’ve written about its Grayskull (entry-level AI hardware for developers) and Wormhole (networked AI) products before.
Tenstorrent is an obvious choice for our top 10 AI companies to watch in 2025: it recently closed a $700 million Series D funding round led by Samsung Securities, with additional investments from LG Electronics, Fidelity, and Bezos Expeditions, the investment firm launched by Amazon founder Jeff Bezos, valuing the startup at $2.6 billion.
Mythic
Mythic is a startup specializing in analog AI chips that deliver power-efficient solutions for AI inference, particularly in edge applications such as IoT, robotics, and consumer devices. Despite reportedly running out of capital in 2022, Mythic rebounded with well-timed additional funding. Its next-generation processor features a new software toolkit and streamlined architecture, making analog computing more accessible.
With offices in Austin, Texas, and Silicon Valley, and led by Dr. Taner Ozcelik, former VP and GM of Nvidia’s automotive division, Mythic says its analog compute-in-memory technology enables chips that are 10x more affordable, consume 3.8x less power, and run 2.6x faster than industry-standard digital AI inference processors.
Its Analog Matrix Processors (AMPs) integrate flash memory with analog computation to enable low-power, high-performance AI inference. Each AMP tile includes the Mythic Analog Compute Engine (ACE), which combines flash memory, analog-to-digital converters (ADCs), and a digital subsystem with a 32-bit RISC-V processor, a SIMD vector engine, and a high-throughput network-on-chip (NoC). This design allows Mythic’s chips to deliver up to 25 TOPS of AI performance while minimizing power consumption and thermal challenges.
Groq
Groq (not to be confused with Grok, Elon Musk's AI chatbot) is led by ex-Google engineer Jonathan Ross, who previously designed Google's tensor processing unit (TPU). The startup develops tensor streaming processors (TSPs) optimized for AI workloads in data centers.
The firm began life with a software-first approach to designing AI hardware, which led to the creation of its Language Processing Unit (LPU), delivering very low latency for AI applications. The first public demo of Groq was a lightning-fast Q&A engine that generated AI answers hundreds of words long in less than a second.
The company currently offers GroqCloud, which allows developers and enterprises to quickly build AI applications, and GroqRack, which provides various interconnected rack configurations for on-prem deployment.
You can try out Groq and get an idea of its speed at GroqChat.
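For developers, GroqCloud exposes an OpenAI-compatible chat completions API, so existing tooling works with minimal changes. The sketch below is a minimal example assuming that compatibility: the endpoint URL and the model name are assumptions and may differ from Groq's current documentation, and a `GROQ_API_KEY` environment variable is required to actually make a call.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; check Groq's docs for the current URL.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_chat_request(prompt, model="llama-3.1-8b-instant"):
    """Build an OpenAI-style chat completion payload.

    The model name is a placeholder; available models change over time.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask_groq(prompt):
    """Send a prompt to GroqCloud and return the generated text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        GROQ_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Only attempt a live request if an API key is configured.
    if os.environ.get("GROQ_API_KEY"):
        print(ask_groq("In one sentence, what is an LPU?"))
```

Because the payload format follows the OpenAI chat schema, the same request body works with most OpenAI-compatible client libraries as well.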
Blumind
Blumind, a Canadian analog AI chip startup established in 2020, focuses on creating high-performance, low-power AI solutions for edge computing applications in areas such as wearables, smart home devices, industrial automation, and mobility.
By relying on standard CMOS technology, Blumind’s approach enables always-on AI processing with minimal power consumption. The startup’s first product, an analog keyword spotting chip, is set for volume production in 2025 and will be available as both a standalone chip and a chiplet that integrates into microcontroller units.
Looking to the future, Blumind says it hopes to scale its analog architecture to create vision CNNs and, ultimately, gigabyte-scale small language models (SLMs).
Lightmatter
Lightmatter looks to lead the way in photonic computing. Based in Boston, the startup is developing AI hardware that uses light for faster and more energy-efficient data processing. Its CEO, Nicholas Harris, PhD, holds over 30 patents and has more than 70 journal publications, focused on quantum and classical information processing with integrated photonics.
Lightmatter's first product, the Passage 3D Silicon Photonics Engine, is described as the world’s fastest interconnect, supporting configurations from single-chip to wafer-scale systems.
The startup's next-generation AI computing platform, Envise, integrates photonics and electronics in a compact design. It supports standard deep learning frameworks and model formats, offering a complete set of tools, including a compiler and runtime, to deliver high inference speeds and accuracy.
Untether AI
Untether AI, founded in 2018 and based in Toronto, Canada, specializes in developing high-performance AI chips designed to accelerate AI inference workloads. Its "at-memory" compute architecture minimizes data movement by placing processing elements adjacent to memory, significantly enhancing both efficiency and performance.
For OEMs and on-prem data centers, it offers a selection of AI accelerator cards in the PCI-Express form factor, including the low-profile, 75-watt TDP speedAI240 Slim Accelerator Card that features the company’s next-generation speedAI240 IC for high performance and reduced power consumption.
Untether AI’s Inference Accelerator ICs include the speedAI240 which has over 1,400 custom RISC-V processors in its unique at-memory architecture. These devices are specifically designed for AI inference workloads and can deliver up to 2 PetaFlops of inference performance and up to 20 TeraFlops per watt.
Enfabrica
Enfabrica wants to revolutionize networking for the age of Gen AI. The Mountain View, California, startup makes Accelerated Compute Fabric SuperNIC (ACF-S) silicon and system-level solutions designed from the ground up to connect and efficiently move data across all endpoints in modern AI data center infrastructure.
Under the leadership of Rochan Sankar, formerly a Senior Director at Broadcom, Enfabrica announced the release of the “world’s fastest” GPU Network Interface Controller chip in November 2024. The ACF SuperNIC, designed with high-radix and high-bandwidth capabilities, delivers an impressive 3.2 Tbps bandwidth per accelerator and will be available in limited quantities starting Q1 2025.
With 800-, 400-, and 100-Gigabit Ethernet interfaces, 32 high-radix network ports, and 160 PCIe lanes on a single ACF-S chip, this technology makes it possible to build AI clusters with more than 500,000 GPUs. The design uses a two-tier network to provide high scale-out throughput and low end-to-end latency across all GPUs in the cluster.
Celestial AI
Celestial AI, based in Santa Clara, California, is one of several startups working to redefine optical interconnects. Its Photonic Fabric technology is designed to disaggregate AI compute from memory, offering what the company describes as a “transformative leap in AI system performance” that is ten years ahead of existing technologies.
Celestial AI claims its Photonic Fabric is the only solution in the industry capable of shattering the memory wall and delivering data directly to the point of compute. It supports current HBM3E and upcoming HBM4 bandwidth and latency demands while maintaining ultra-low power consumption in the single-digit pJ/bit range.
We're not the only ones who think Celestial AI should be on your radar for 2025: the company was deservedly recognized with the “Start-Up to Watch” award at the GSA Awards Celebration in December 2024.
SambaNova Systems
SambaNova, led by co-founder and CEO Rodrigo Liang, who was previously responsible for SPARC processor and ASIC development at Oracle, is a SoftBank-funded company building reconfigurable dataflow units (RDUs). SambaNova describes its RDUs as “the GPU alternative” for accelerating AI training and inference, particularly in enterprise and cloud environments.
Samba-1, powered by the SambaNova Suite, was the startup’s first trillion-parameter generative AI model. It uses a Composition of Experts (CoE) architecture that aggregates multiple smaller "expert" models into a single, larger solution.
For Generative AI development, there’s the SambaNova DataScale, claimed to be the “world's fastest hardware platform for AI”, powered by the SN40L RDU, which has a three-tiered memory architecture and a single system node supporting up to 5 trillion parameters.
SambaNova was named the “Most Respected Private Semiconductor Company” at the GSA Awards Celebration in 2024, recognizing its industry-leading products, visionary approach, and promising future opportunities. (Tenstorrent was a runner-up in the same category).
Hailo
Hailo, based in Tel Aviv, Israel, develops what it claims are the world’s best edge AI processors, uniquely designed for high-performance deep learning applications on edge devices.
Hailo’s co-processors, including the Hailo-8 and Hailo-10H, integrate with edge platforms and support a wide range of neural networks, vision transformer models, and LLMs. In addition, the company offers the Hailo-15 AI Vision Processor (VPU), a family of AI-centric camera SoCs that deliver high-performance AI processing for tasks such as image enhancement, AI-driven video pipelines, and advanced analytics.
Applications for Hailo’s processors include automotive, security, industrial automation, retail, and personal computing. The startup has also announced several partnerships for its processors, including with the Raspberry Pi Foundation to supply it with AI accelerators for the Raspberry Pi AI Kit, an add-on for the Raspberry Pi 5.
Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.