
Untether AI Ushers in the PetaOps Era with At-Memory Computation for AI Inference Workloads

tsunAImi™ accelerator card packs 2 PetaOps of performance in a PCI-Express form factor

Powered by the runAI200 chip, the industry’s first at-memory computation engine offering unrivaled 8 TOPs/W efficiency

TORONTO--(BUSINESS WIRE)--Today at the fall Linley Processor Conference, Untether AI™ unveiled its tsunAImi™ accelerator cards, powered by its runAI™ devices. Using at-memory computation, Untether AI breaks through the barriers of traditional von Neumann architectures, offering industry-leading compute density with power and price efficiency.

The Need for Speed

Artificial Intelligence (AI) workloads for inference require increasing amounts of compute resources, far outstripping the gains available to traditional CPU and GPU architectures. The slowing of Moore’s Law and the end of Dennard scaling further limit future gains in performance from traditional computing approaches. Solving this dilemma is important, as inference acceleration in the datacenter, using AI accelerators, is estimated to be a $10 billion market by 2025, according to McKinsey & Company.

Untether AI was founded to radically rethink how computation for machine learning is accomplished. In current architectures, 90 percent of the energy for AI workloads is consumed by data movement: transferring the weights and activations between external memory, on-chip caches, and finally the computing element itself. By focusing on the needs of inference acceleration and maximizing power efficiency, Untether AI is able to deliver two PetaOperations per second (POPs) in a standard PCI-Express card form factor.

“For AI inference in cloud and datacenters, compute density is king. Untether AI is ushering in the PetaOps era to accelerate AI inference workloads at scale with unprecedented efficiency,” said Arun Iyengar, CEO of Untether AI.

The Most Efficient AI Compute Engine Available – runAI200 Devices

Tailored for inference acceleration, runAI200 devices operate using integer data types and a batch size of 1. At the heart of the unique at-memory compute architecture is a memory bank: 385KB of SRAM with a 2D array of 512 processing elements. With 511 banks per chip, each device offers 200MB of memory and operates at up to 502 TeraOperations per second in its “sport” mode. It can also be configured for maximum efficiency, offering 8 TOPs per watt in “eco” mode. runAI200 devices are manufactured using a cost-effective, mainstream 16nm process.
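The per-chip figures quoted above can be cross-checked with simple arithmetic. The sketch below is a back-of-the-envelope tally using only the numbers in this release (385KB banks, 512 processing elements per bank, 511 banks per chip); the 200MB memory figure appears to be a rounded marketing total.

```python
# Back-of-the-envelope tally of the runAI200 figures quoted above.
banks_per_chip = 511
sram_per_bank_kb = 385
pes_per_bank = 512

total_sram_mb = banks_per_chip * sram_per_bank_kb / 1024  # KB -> MB (binary)
total_pes = banks_per_chip * pes_per_bank

print(f"On-chip SRAM: {total_sram_mb:.1f} MB")  # ~192 MB, quoted as 200MB
print(f"Processing elements per chip: {total_pes}")  # 261,632
```

The ~192MB result is consistent with the quoted "200MB of memory" once rounded; the exact rounding convention is not stated in the release.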

“As AI compute requirements continue to explode, new architectures are needed to meet these demands,” said Linley Gwennap, principal analyst, The Linley Group. “Untether AI’s runAI200 devices, with their innovative at-memory compute architecture, break through traditional von Neumann architecture bottlenecks and represent a new breed of AI accelerators.”

2 PetaOps at the Lowest Price per TOP – tsunAImi Accelerator Cards

tsunAImi accelerator cards are powered by four runAI200 devices, providing 2 POPs of compute, more than twice that of any currently announced PCIe card. This compute power translates into over 80,000 frames per second of ResNet-50 v1.5 throughput at batch=1, three times the throughput of its nearest competitor. For natural language processing, tsunAImi accelerator cards can process more than 12,000 queries per second (qps) of BERT-base, four times faster than any announced product.
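The card-level claim follows from the per-device numbers: four runAI200 devices at the 502 TOPS "sport" rating sum to roughly 2 PetaOps. A minimal sketch of that arithmetic, assuming all four devices run at the sport-mode rating simultaneously (an assumption; the release does not describe card-level clocking):

```python
# Card-level arithmetic implied by the figures above (sketch, not a spec).
devices_per_card = 4
tops_per_device_sport = 502  # per-device "sport" mode rating

card_tops = devices_per_card * tops_per_device_sport
print(f"tsunAImi card: {card_tops} TOPS ~= {card_tops / 1000:.0f} POPs")
# 2008 TOPS, i.e. the quoted 2 PetaOps
```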

“When we founded Untether AI, our laser focus was unlocking the potential of scalable AI, by delivering more efficient neural network compute,” said Martin Snelgrove, co-founder and CTO of Untether AI. “We are gratified to see our technology come to fruition.”

Simple, Automatic Tool Flow – the imAIgine Software Development Kit

Until now, making neural networks perform optimally has been a manual process. The Untether AI imAIgine™ Software Development Kit (SDK) provides an automated path to running networks at high performance, with push-button quantization, optimization, physical allocation, and multi-chip partitioning. The imAIgine SDK frees data scientists from low-level optimization tasks, letting them spend time where it matters to them – crafting their models. The imAIgine SDK also provides an extensive visualization toolkit, a cycle-accurate simulator, and an easily integrated runtime API.

Availability

The imAIgine SDK is currently in Early Access (EA) with select customers and partners. The tsunAImi accelerator card is sampling now and will be commercially available in 1Q2021.

About Untether AI

Untether AI provides ultra-efficient, high-performance AI chips to enable new frontiers in AI applications. By combining the power efficiency of at-memory computation with the robustness of digital processing, Untether AI has developed a groundbreaking new chip architecture for neural net inference that eliminates the data movement bottleneck that costs energy and performance in traditional architectures. Founded in Toronto in 2018, Untether AI is funded by Radical Ventures and Intel Capital. www.untether.ai.

All references to Untether AI trademarks are the property of Untether AI. All other trademarks mentioned herein are the property of their respective owners.

Contacts

Media Contact for Untether AI:
Michelle Clancy, Cayenne Global, +1.503.702.4732
michelle.clancy@cayennecom.com

Company Contact:
Robert Beachler, Untether AI, +1.650.793.8219
beach@untether.ai

Connect with Untether AI:
Twitter: @UntetherAI
LinkedIn: https://www.linkedin.com/company/untether-ai/
