As the popularity of large language models drives demand for AI chips, demand for the memory that feeds them is rising in step. SK Hynix, a South Korea-based company that supplies memory chips to AI vendors such as Nvidia, has announced plans to invest approximately $75 billion in AI chips by 2028.
This announcement, made on June 30, follows South Korea’s decision last month to provide about $19 billion in financial support to eligible chipmakers. SK Hynix intends to allocate roughly 80% of the $75 billion to high-bandwidth memory (HBM) chips, which stack multiple memory dies vertically to deliver higher bandwidth while consuming less energy.
Memory chip providers SK Hynix, Micron, and Samsung have already sold out their supply for 2024 and 2025, a measure of how strong demand for AI memory chips has become. That demand is driven by the growing integration of AI into applications ranging from mobile devices to contact centers.
Even as tech giants such as Google, Meta, and Microsoft buy thousands of GPUs from Nvidia, those systems still need sufficient memory to keep the processors fed. This need for both compute and memory is prompting SK Hynix and competitors Samsung and Micron to expand production of their AI memory chips.
For example, Micron is building HBM testing and mass-production lines in Idaho, backed by $6.1 billion in funding under the U.S. CHIPS and Science Act. Samsung, after earlier delays, is also beginning construction on a new semiconductor plant.
The scramble for memory chips underscores how central these components are to AI systems. The trend benefits not only chip vendors such as Nvidia, Intel, and AMD, but also the suppliers of power and memory for the data centers that run their hardware.
Overall, the rapid development of generative AI presents a lucrative opportunity for the memory chip industry, with memory now among the most in-demand components in the chip market.