
🔹 Experienced Sales and Business Development Leader | Memory and Supply Chain Consultant | Tech and Semiconductor Expert 🔹
With over 16 years in tech sales and business development, I have helped companies across the semiconductor, memory, and high-performance computing industries grow revenue, build strategic partnerships, and scale globally.
I build and lead scenario-based planning frameworks that help companies prepare for what-if situations across their component supply chains. These models simulate risks such as capacity constraints, price fluctuations, and supplier disruption, enabling faster, more informed decision-making.
Today, I provide consultative services focused on memory solutions and supply chain strategy. I help businesses optimize sourcing, manage operational risk, and align technical solutions with long-term growth objectives.
Whether you are a startup or an established industry player, I work with companies to uncover opportunity gaps, solve complex supply challenges, and implement sales strategies that drive measurable impact.
Let’s connect and discuss how I can help support your growth.
When you email me through this website, I only use your contact details and the information you provide to respond to your inquiry. Your personal information will not be sold, shared, or distributed to third parties without your consent, unless required by law.
I take reasonable steps to keep your data secure, but please note that email is not always a fully secure method of communication. If you prefer, you may request an alternative way to share sensitive information.
By contacting me, you consent to the collection and use of your information for the purpose of communication.
AI and Machine Learning
Advised clients targeting AI, ML, and HPC applications on scenario-based planning for supply chain and sales.
TSMC CoWoS is a bottleneck: NVIDIA, AMD, and others rely on TSMC for GPU + HBM packaging. Shortages in CoWoS capacity have already constrained shipments of H100 and MI300.
Alternatives: Samsung pushes I-Cube, Intel uses EMIB and Foveros. But TSMC CoWoS is considered the most mature and scalable at high volumes.
Future: For HBM4 and beyond, CoWoS-L (larger interposers), InFO-SoIC, and hybrid bonding will further integrate compute + memory.
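The scenario-based planning described above can be sketched as a small Monte Carlo simulation. This is purely illustrative: every figure below (demand, prices, disruption odds, capacity haircuts) is a hypothetical round number, not data from any client engagement.

```python
import random

def simulate_quarter(units_needed=10_000, base_price=12.0):
    """Simulate one quarter of component sourcing under three risks.
    All probabilities and magnitudes are hypothetical examples."""
    allocated = units_needed
    # Risk 1: supplier disruption (10% chance of losing 40% of allocation)
    if random.random() < 0.10:
        allocated = int(allocated * 0.60)
    # Risk 2: capacity constraint trims allocation by 0-15%
    allocated = int(allocated * (1 - random.uniform(0.0, 0.15)))
    # Risk 3: price fluctuation of +/-20% around the base price
    price = base_price * random.uniform(0.80, 1.20)
    shortfall = units_needed - allocated
    return shortfall, allocated * price

def run_scenarios(trials=10_000):
    """Average shortfall and spend across many simulated quarters."""
    results = [simulate_quarter() for _ in range(trials)]
    avg_shortfall = sum(s for s, _ in results) / trials
    avg_spend = sum(c for _, c in results) / trials
    return avg_shortfall, avg_spend

if __name__ == "__main__":
    shortfall, spend = run_scenarios()
    print(f"Average shortfall: {shortfall:.0f} units; average spend: ${spend:,.0f}")
```

Running many trials like this turns vague what-ifs into distributions a planning team can act on, for example sizing buffer inventory against the expected shortfall.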
High Bandwidth Memory
HBM can accelerate CPUs with wide SIMD (Intel Xeon with HBM, Fujitsu A64FX). Here it helps keep vector units saturated, especially in HPC codes (climate modeling, CFD, genomics).
HBM is the ideal partner for SIMT execution: thousands of threads can issue memory requests without starving compute units. This is crucial for training large AI models (LLMs with hundreds of billions of parameters).
HBM is the firehose of data that trains AI. SIMT thrives when thousands of threads stay flooded with data.
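The "thousands of threads" claim can be made concrete with Little's law: the data that must be in flight equals bandwidth times memory latency. The latency figure below is an assumed round number, not any specific GPU's spec.

```python
# Little's law sketch: bytes in flight = bandwidth (B/s) x latency (s).
# 819.2 GB/s is one HBM3 stack (1024 bits x 6.4 Gbps per pin / 8);
# the 500 ns round-trip latency is an illustrative assumption.
bandwidth_gbs = 819.2
latency_ns = 500
bytes_in_flight = bandwidth_gbs * 1e9 * latency_ns * 1e-9
print(f"{bytes_in_flight / 1024:.0f} KiB must be in flight")  # 400 KiB

# At 128 bytes per outstanding request (roughly one cache line per
# coalesced warp access), that is thousands of concurrent requests:
requests = bytes_in_flight / 128
print(f"~{requests:.0f} concurrent requests")  # ~3200 requests
```

Only a massively threaded SIMT machine can keep that many requests outstanding, which is why HBM and GPUs pair so well.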
GDDR6, GDDR6X, GDDR7
GDDR is a specialized form of DRAM designed for high bandwidth, optimized signaling, and graphics/parallel workloads. Current generations (GDDR6, GDDR6X, and soon GDDR7) hit 20–32 Gbps per pin, delivering hundreds of GB/s to over 1 TB/s of bandwidth per GPU.
Cloud providers also deploy GDDR-based accelerators for inference, where bandwidth demands are high but not always at HBM levels.
Startups and second-tier vendors often choose GDDR over HBM because it is cheaper and easier to source, while still providing very high throughput.
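The bandwidth figures above follow from simple arithmetic: peak GB/s = bus width in bits times the per-pin rate in Gbps, divided by 8. The bus widths and rates below are illustrative configurations, not any specific product's spec sheet.

```python
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and signaling rate."""
    return bus_width_bits * gbps_per_pin / 8

print(peak_bandwidth_gbs(256, 20.0))   # 640.0  -> "hundreds of GB/s"
print(peak_bandwidth_gbs(384, 21.0))   # 1008.0 -> "over 1 TB/s"
print(peak_bandwidth_gbs(384, 32.0))   # 1536.0 -> top-of-range 32 Gbps signaling
```

The same formula explains the GDDR-versus-HBM trade-off: GDDR pushes fewer pins at very high per-pin rates over a board, while HBM uses a much wider (1024-bit per stack) but slower in-package interface.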
Compute using SSDs and DRAM
AI and ML training starts at the CPU, with DRAM and SSDs feeding data into the system.
With longer context windows and heavier workloads, the entire datacenter solution must be considered, from CPU, DRAM, and SSD through the GPU and HBM. A weak link anywhere in the chain can bring your training to a halt.