DreamBig's World-Leading "MARS" Open Chiplet Platform Enables Scaling of Next-Generation Large Language Model (LLM), Generative AI, and Automotive Semiconductor Solutions

This Open Chiplet Platform enables any customer to efficiently scale up their choice of compute/accelerator Chiplets for applications such as AI training and inference, Automotive, Datacenter, and Edge. A DreamBig technology demonstration will be showcased at CES 2024 in The Venetian Expo, Bellini 2003 Meeting Room.

SAN JOSE, Calif., Jan. 8, 2024 /PRNewswire/ -- DreamBig Semiconductor, Inc. today unveiled "MARS", a world-leading platform that enables a new generation of semiconductor solutions using open-standard Chiplets for the mass market. This disruptive platform will democratize silicon by enabling companies of any size, from startups on up, to scale up and scale out LLM, Generative AI, Automotive, Datacenter, and Edge solutions with optimized performance and energy efficiency.

DreamBig "MARS" Chiplet Platform allows customers to focus investment on the areas of silicon where they can differentiate to have competitive advantage and bring a solution to market faster at lower cost by leveraging the rest of the open standard chiplets available in the platform. This is particularly critical for the fast moving AI training and inference market where the best performance and energy efficiency can be achieved when the solution is application specific.

"DreamBig is disrupting the industry by providing the most advanced open chiplet platform for customers to innovate never before possible solutions combining their specialized hardware chiplets with infrastructure that scales up and out maintaining affordable and efficient modular product development," said Sohail Syed, CEO of DreamBig Semiconductor.

DreamBig "MARS" Chiplet Platform solves the two biggest technical challenges facing HW developers of AI servers and accelerators – scaling up compute and scaling out networking. The Chiplet Hub is the most advanced 3D memory first architecture in the industry with direct access to both SRAM and DRAM tiers by all compute, accelerator, and networking chiplets for data movement, data caching, or data processing. Chiplet Hubs can be tiled in a package to scale-up at highest performance and energy efficiency. RDMA Ethernet Networking Chiplets provide unparalleled scale-out performance and energy efficiency between devices and systems with independent selection of data path BW and control path packet processing rate.

"Customers can now focus on designing the most innovative AI compute and accelerator technology chiplets optimized for their applications and use the most advanced DreamBig Chiplet Platform to scale-up and scale-out to achieve maximum performance and energy efficiency," said Steve Majors, SVP of Engineering at DreamBig Semiconductor. "By establishing leadership with 3D HBM backed by multiple memory tiers under HW control in Silicon Box advanced packaging that provides highest performance at lowest cost without the yield and availability issues plaguing the industry, the barriers to scale are eliminated."

The platform's Chiplet Hub and Networking Chiplets offer the following differentiated features:

  • Open standard interfaces and architecture agnostic support for CPU, AI, Accelerator, IO, and Memory Chiplets that customers can compose in a package
  • Secure boot and management of chiplets as a unified system-in-package similar to a platform motherboard of chips
  • Memory-First Architecture with direct access from all chiplets to cache/memory tiers, including low-latency SRAM/3D HBM stacked on Chiplet Hubs and high-capacity DDR/CXL/SSD on chiplets
  • FLC Technology Group fully associative HW acceleration for cache/memory tiers
  • HW DMA and RDMA for direct placement of data to any memory tier from any local or remote source
  • Algorithmic TCAM HW acceleration for Match/Action when scaled out to the cloud
  • Virtual PCIe/CXL switch for flexible root port or endpoint resource allocation
  • Optimized for Silicon Box advanced Panel Level Packaging to achieve the best performance/power/cost – an alternative to CoWoS for the AI mass market

Customers are currently working with DreamBig on next generation devices for the following use cases:

  • AI Servers and Accelerators
  • High-end Datacenter and Low-end Edge Servers
  • Petabyte Storage Servers
  • DPUs and DPU Smart Switches
  • Automotive ADAS, Infotainment, Zonal Processors

"We are very proud of what DreamBig has achieved establishing leadership in driving a key pillar of the market for high performance, energy conscious, and highly scalable AI solutions to serve the world," stated Sehat Sutardja and Weili Dai, Co-founders and Chairman/Chairwoman of DreamBig. "The company has raised the technology bar to lead the semiconductor industry by delivering the next generation of open chiplet solutions such as Large Language Model (LLM), Generative AI, Datacenter, and Automotive solutions for the global mass market."

To learn more, come see the DreamBig technology solution demo January 9-12 at CES 2024, The Most Powerful Tech Event in the World, in The Venetian Expo, Bellini 2003 Meeting Room.

About DreamBig Semiconductor

DreamBig, founded in 2019, is developing a disruptive, world-leading Chiplet Platform that enables customers to bring to market the next generation of high-performance, energy-conscious, affordable, scalable, and composable semiconductor chiplet solutions for the AI revolution and the digital world. The company specializes in high-performance applications for the Large Language Model (LLM), Generative AI, Datacenter, Edge, and Automotive markets.

DreamBig provides the industry's most advanced Chiplet Hub to scale up compute/accelerator Chiplets, and the industry's most advanced Networking Chiplets to scale out.

 

SOURCE DreamBig Semiconductor, Inc.

For further information: Lauren Wanthal, lauren.wanthal@theacceleration.com