Saturday, October 19, 2024

Four major effects of artificial intelligence on data storage

Artificial intelligence (AI) is experiencing rapid growth as an enterprise technology. According to IBM, around 42% of firms with more than 1,000 employees currently use AI, and a further 40% are testing or experimenting with it. Generative AI (GenAI) and large language models (LLMs) such as ChatGPT are driving much of this innovation, and are being deployed in enterprise applications and in customer interactions through chatbots.

Most GenAI systems are currently cloud-based, but efforts are under way to make it easier to integrate LLMs with enterprise data. AI training places significant demands on storage input/output (I/O). The quality of the training data correlates with the reliability of the model, and more data typically leads to better results, so the training phase requires substantial IT infrastructure, including storage. The type of storage depends on the type of data: unstructured data is often held on file or object storage, while structured data sits on block storage. Some projects may use all three storage types.
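To make the distinction concrete, here is a minimal Python sketch of a training pipeline fed from two storage types: unstructured documents pulled from object storage, and structured records read from a database sitting on a block volume. The bucket, prefix, table, and paths are hypothetical placeholders, not a reference design.

```python
# Minimal sketch of feeding a training corpus from two storage types.
# Bucket, prefix, table, and paths below are hypothetical placeholders.
import sqlite3

import boto3  # third-party AWS SDK; assumes credentials are configured


def load_unstructured_docs(bucket: str, prefix: str) -> list[str]:
    """Pull raw text documents from object storage (here, S3)."""
    s3 = boto3.client("s3")
    docs = []
    for page in s3.get_paginator("list_objects_v2").paginate(
        Bucket=bucket, Prefix=prefix
    ):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            docs.append(body.decode("utf-8"))
    return docs


def load_structured_rows(db_path: str) -> list[tuple]:
    """Pull structured records from a database on block storage."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute("SELECT ticket_id, note FROM support_tickets").fetchall()


corpus = load_unstructured_docs("training-data", "raw-docs/") + [
    note for _, note in load_structured_rows("/mnt/block-volume/tickets.db")
]
```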

The location of model training also affects storage needs. Cloud storage is typically chosen when training runs in the cloud, because of the ease of access to compute resources. If data is held on-premise, by contrast, local compute may be used to retain control over hardware configuration. The GPUs that commonly power AI training need storage fast enough to keep them fed. The choice between local and cloud storage ultimately depends on how the business plans to use the AI model. Cloud storage is more cost-effective if the training phase is short-lived and storage can be scaled back down afterwards. If data must be retained for ongoing training or fine-tuning, however, the on-demand advantages of cloud weaken.
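A back-of-envelope calculation shows why retention changes the equation. The dataset size and storage rates below are illustrative assumptions, not vendor pricing:

```python
# Back-of-envelope cost comparison for holding a training data set.
# All figures are illustrative assumptions, not vendor quotes.
DATASET_TB = 50
CLOUD_PER_TB_MONTH = 25.0    # assumed object-storage rate, $/TB-month
ONPREM_PER_TB_MONTH = 10.0   # assumed amortised on-prem rate, $/TB-month


def cloud_cost(months: float) -> float:
    return DATASET_TB * CLOUD_PER_TB_MONTH * months


def onprem_cost(months: float) -> float:
    return DATASET_TB * ONPREM_PER_TB_MONTH * months


# A short, one-off training burst favours cloud: scale up, then delete.
print(f"3-month burst in cloud:  ${cloud_cost(3):,.0f}")
# Retaining data for ongoing fine-tuning erodes that advantage.
print(f"36 months in cloud:      ${cloud_cost(36):,.0f}")
print(f"36 months on-premise:    ${onprem_cost(36):,.0f}")
```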

Once a model is trained, its storage requirements typically decrease. AI systems in production run queries through optimized algorithms and are generally more efficient. However, data inputs and outputs are still needed during the operational, or inference, phase. Low-latency, high-performance I/O is essential for effective AI inference, especially in time-sensitive applications such as cyber security, threat detection, and IT process automation. AI applications that aim to replicate human-like interactions also require fast response times.
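One way to make the latency requirement operational is to measure storage reads against an explicit budget. The sketch below times repeated reads of a stand-in feature record; the 5 ms budget and 64 KiB payload are assumed figures for illustration:

```python
# Minimal sketch: check that a storage read fits an inference latency
# budget. The 5 ms budget and 64 KiB payload are assumed figures.
import statistics
import tempfile
import time

BUDGET_MS = 5.0

# Stand-in for a feature record the model reads at inference time.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 64 * 1024)
    path = f.name

samples = []
for _ in range(100):
    start = time.perf_counter()
    with open(path, "rb") as fh:
        fh.read()
    samples.append((time.perf_counter() - start) * 1000)

p99 = statistics.quantiles(samples, n=100)[98]  # 99th percentile, in ms
verdict = "within" if p99 <= BUDGET_MS else "exceeds"
print(f"p99 read latency: {p99:.2f} ms ({verdict} {BUDGET_MS} ms budget)")
```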

Data management is an ongoing requirement for AI systems. Data scientists aim to access as much data as possible to improve model accuracy. Organizations therefore need to weigh several factors in their data and storage management approach, such as the storage media used (flash or spinning disk), where data is archived, and how long it is retained. The AI training and inference phases gather data from multiple sources, including applications, user inputs, and sensors. Data fabrics are being explored as a way to feed AI systems, but performance issues can arise, which may mean spanning the data fabric across different storage tiers to balance cost and performance. For now, GenAI poses less of a challenge because LLMs are trained largely on internet data, but this may change as more organizations adapt LLMs to their own data.
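As a simple illustration of tier management, the following sketch demotes files that have not been accessed recently from a hot flash tier to a cheaper archive tier. The paths and the 90-day threshold are assumptions for illustration, not recommendations:

```python
# Minimal tiering sketch: demote files untouched for 90 days from a
# hot (flash) tier to an archive tier. Paths and threshold are assumed.
import shutil
import time
from pathlib import Path

HOT_TIER = Path("/mnt/flash/training-pool")
ARCHIVE_TIER = Path("/mnt/archive/training-pool")
MAX_AGE_DAYS = 90

cutoff = time.time() - MAX_AGE_DAYS * 86400
for item in HOT_TIER.rglob("*"):
    if item.is_file() and item.stat().st_atime < cutoff:
        dest = ARCHIVE_TIER / item.relative_to(HOT_TIER)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(item), str(dest))  # demote to cheaper media
```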

Compliance and data security are crucial considerations for enterprises using AI. Data sovereignty concerns are prompting closer attention to where data is stored during the AI training and inference phases, particularly with cloud-based AI services. Organizations must also control the storage of their model's inputs and outputs. For models running on local systems, existing data protection and compliance policies should cover most AI use cases. Even so, it is advisable to curate the data that goes into the AI training pool and to define clearly what data should, and should not, be retained in the model. With tools such as ChatGPT, it may be acceptable for data to be stored in the cloud and transferred abroad, but proper contract terms should be in place to govern data sharing.
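One lightweight way to enforce such a retention policy is to scrub records before they enter the training pool. In this sketch the denied field names are hypothetical examples, not a compliance standard:

```python
# Minimal sketch: drop fields a retention policy says must not enter
# the training pool. The field names are hypothetical examples.
DENY_FIELDS = {"email", "phone", "national_id", "card_number"}


def scrub(record: dict) -> dict:
    """Return a copy of the record with disallowed fields removed."""
    return {k: v for k, v in record.items() if k not in DENY_FIELDS}


raw = {"ticket_id": 42, "note": "printer jam", "email": "user@example.com"}
training_pool = [scrub(raw)]  # only policy-approved fields are retained
print(training_pool)          # [{'ticket_id': 42, 'note': 'printer jam'}]
```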