Software-Defined Memory (SDM) Pioneer Kove Announces Benchmarks Showing 5x Larger AI Inference Workloads on Redis & Valkey, With Lower Latency Than Local Memory
Sep 9, 2025 11:52 AM

CHICAGO, Sept. 9, 2025 /PRNewswire/ -- Kove, creator of software-defined memory Kove:SDM™, today announced benchmark results showing that Redis and Valkey — two of the most widely used engines in AI inference — can run workloads up to 5x larger, with lower latency than local DRAM in a majority of cases, when powered by remote memory via Kove:SDM™. Kove:SDM™ is the world's first and only commercially available software-defined memory solution. Using pooled memory on any hardware supported by Linux, Kove:SDM™ allows technologists to dynamically right-size memory according to need. This enables better inference-time processing, faster time to solution, increased memory resilience, and substantial energy reductions. The announcement, made during CEO John Overton's keynote at AI Infra Summit 2025, identifies memory, not compute, as the next bottleneck for scaling AI inference.

While GPUs and CPUs continue to scale, traditional DRAM remains fixed, fragmented, and underutilized — stalling inference workloads, creating unnecessary and repetitive GPU processing, and driving up costs due to inefficient GPU utilization. Tiering to NVMe storage can aid in reducing GPU recomputation but also delivers greatly reduced performance compared to system memory. Instead, Kove:SDM™ virtualizes memory across servers, creating much larger elastic memory pools that behave and perform like local DRAM to break through the "memory wall" that continues to constrain scale-out AI inference. With Kove:SDM™, KV cache can remain in memory without suffering the performance degradation from tiering across HBM, memory, and storage. What happens in memory, stays in memory.

Redis and Valkey Benchmarks: Proof for AI Inference

Kove:SDM™ improves capacity and latency compared to local server memory. Independent benchmarks were run on the same server on Oracle Cloud Infrastructure, first without Kove:SDM™ and then with it. With 5x more memory from Kove:SDM™, server performance compared to local memory was:

Redis Benchmark (v7.2.4)

50th Percentile: SET 11% faster, GET 42% faster
100th Percentile: SET 16% slower, GET 14% faster

Valkey Benchmark (v8.0.4)

50th Percentile: SET 10% faster, GET 1% faster
100th Percentile: SET 6% faster, GET 25% faster

"These results show that software-defined memory can directly accelerate AI inference by eliminating KV cache evictions and redundant GPU computation," said John Overton, CEO of Kove. "Every GET that doesn't trigger a recompute saves GPU cycles. For large-scale inference environments, that translates into millions of dollars saved annually."
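For context, percentile comparisons like those above can be derived from raw latency samples. The sketch below is illustrative only: the latency figures are hypothetical, not Kove's benchmark data, and the nearest-rank percentile method is one common convention.

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (msec)."""
    ordered = sorted(samples)
    idx = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[idx]

def speedup_pct(baseline, candidate, pct):
    """Percent faster (positive) or slower (negative) at a given percentile."""
    b, c = percentile(baseline, pct), percentile(candidate, pct)
    return (b - c) / b * 100

# Hypothetical GET latency samples (msec): baseline vs. pooled-memory run.
local_dram = [0.40, 0.42, 0.45, 0.50, 0.55, 0.60, 0.70, 0.90, 1.10, 1.50]
pooled = [0.25, 0.26, 0.28, 0.30, 0.32, 0.35, 0.40, 0.55, 0.70, 1.20]

print(f"p50:  {speedup_pct(local_dram, pooled, 50):+.0f}%")   # p50:  +42%
print(f"p100: {speedup_pct(local_dram, pooled, 100):+.0f}%")  # p100: +20%
```

Tail percentiles (p100) reflect worst-case samples, which is why they can move independently of the median, as in the SET results above.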

Redis has long powered caching workloads across industries. Valkey, an open-source fork of Redis, is now widely integrated into vLLM-based inference stacks, making it central to modern inference pipelines. By expanding KV cache capacity and improving performance, Kove:SDM™ directly addresses one of AI's most urgent challenges.

Structural Business Impact

Enterprises deploying AI at scale stand to benefit from significant financial savings:

$30–40M+ annual savings typical for large-scale deployments.
20–30% lower hardware spend by deferring costly high-memory server refreshes.
25–54% lower power and cooling costs from improved memory efficiency.
Millions in avoided downtime by eliminating memory bottlenecks that cause failures.

"Kove has created a new category — software-defined memory — that makes AI infrastructure both performant and economically sustainable," said Beth Rothwell, Director of GTM Strategy at Kove. "It's the missing layer of the AI stack. Without it, AI hits a wall. With it, AI inference scales, GPUs stay busy doing the right work, and enterprises save tens of millions."

Why It Matters Now

AI demand is doubling every 6–12 months, while DRAM budgets cannot keep pace. Existing solutions like tiered KV caching or storage offload reduce efficiency or add latency. By contrast, Kove:SDM™ pools DRAM across servers while delivering local memory performance. KV cache tiering to storage can be 100–1000x less performant than Kove:SDM™.

Kove:SDM™ is available now, deploys without application or code changes, and runs on any x86 hardware supported by Linux. 

About Kove

Founded in 2003, Kove has a long history of solving technology's most vexing problems, from launching high-speed back-ups for large databases to setting sustained storage speed records. Kove invented pioneering technology using distributed hash tables for locality management. This breakthrough enabled unlimited scaling and helped give rise to cloud storage and scale-out database sharding, among other markets. Most recently, after years of development, testing, and validation, Kove delivered the world's first patented and mature software-defined memory solution, Kove:SDM™. Kove:SDM™ enables organizations and their leaders to achieve more by maximizing the performance of their people and infrastructure. Kove's team of passionate software engineers and technologists understands the importance of access to high-performance computing technology and has worked with clients across a variety of industry verticals, from financial services and life sciences to energy and defense. Kove is committed to delivering the products and personalized service that enable every enterprise to reach its full potential. To learn more, visit kove.com.

Contact: CJ Martinez, 1-310-980-5431, [email protected] 

View original content to download multimedia:https://www.prnewswire.com/news-releases/software-defined-memory-sdm-pioneer-kove-announces-benchmarks-showing-5x-larger-ai-inference-workloads-on-redis--valkey-with-lower-latency-than-local-memory-302550892.html

SOURCE Kove
