Data is the “new oil” for modern tech, transforming countless industries and providing invaluable insights as organizations leverage artificial intelligence (AI) and machine learning (ML). But this data-rich future, in which information once bound for cold storage becomes an actionable, strategic asset, comes with challenges. More data must be stored safely, at reasonable cost, and over longer time spans, even as enterprises forge a data foundation layer that turns every type of data they own from a liability to be stored and defended into an asset to be leveraged.
Enterprises need the right storage infrastructure to manage this transition and unlock the potential value in their data. In this blog post, we outline how storage has evolved to combat the challenges of AI, ML, and big data and how the new generation of data storage offers a better solution than traditional stacks.
What ML and Big Data Need
To build a successful data storage layer for AI and ML operations on large amounts of data, your infrastructure must provide:
- High performance: Data is created and consumed concurrently by multiple users, devices, and applications, and in some cases (such as IoT) thousands or millions of sensors generate continuous flows of structured and unstructured data.
- High capacity: Petabyte- and exabyte-scale systems are becoming common in very large organizations across all industries.
- Easy access: You need systems that can be accessed remotely, across long distances, while weathering unpredictable network latency. These systems must also manage large capacities and huge numbers of files in a single domain without trading one off against the other.
- Intelligence: Rich metadata is a fundamental component for making data indexable, identifiable, searchable, and eventually reusable. The Extract, Transform, and Load (ETL) phase should ideally be automated; offloading this process to the storage system simplifies operations and makes data easier to find and quickly reusable (see the metadata sketch after this list).
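As a rough illustration of what rich, ingest-time metadata can look like, the sketch below uses the standard S3 API (via boto3) to attach custom metadata to an object as it is uploaded. The endpoint, bucket, and metadata keys are illustrative assumptions rather than details from the post, but any S3-compatible object store should accept the same call.

```python
import boto3

# Assumption: an S3-compatible object store reachable at this endpoint.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Upload a file and attach custom (user-defined) metadata at ingest time.
# The keys below are stored as x-amz-meta-* headers on the object and can
# later be harvested and indexed by an external search or ETL tool.
with open("device-42.json", "rb") as body:
    s3.put_object(
        Bucket="ml-training-data",              # illustrative bucket name
        Key="sensors/2020/07/device-42.json",
        Body=body,
        Metadata={
            "source": "iot-gateway-eu-1",       # where the data came from
            "schema-version": "3",              # lets downstream ETL pick the right parser
            "label-status": "unlabeled",        # ML-specific tag for later curation
        },
    )
```

Because the metadata travels with the object itself, any downstream consumer that speaks S3 can discover it without a separate catalog lookup.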
Building a Better System
It is tough to find all of these characteristics in a traditional storage system; in fact, they look incompatible at first glance. Often, several different technologies must be stacked to deliver them all:
- All-flash storage enables high-performance and low-latency access to data
- Object storage makes data accessible from everywhere
- External resources handle metadata augmentation, indexing, and search operations, enabling rich interaction with the data
Rather than building such a complicated stack, organizations can now turn to an answer that has emerged over the last few years: Next-Generation Object Storage. This solution uses all-flash and hybrid (flash and spinning media) object stores to combine the characteristics of traditional object stores with those usually found in block and file storage. The result:
- High performance: Flash-optimized systems handle small and large files alike, delivering high throughput with low latency and parallelism.
- Smart: Integration with message brokers and serverless frameworks lets the system send event notifications that trigger functions, so it can understand and augment what is stored while data is being ingested (see the event-notification sketch after this list).
- Analytics tools integration: Standard, custom, and augmented metadata is indexed automatically with tools like Elasticsearch, and a growing number of data analytics tools, such as Apache Spark, can access data directly through Amazon S3 interfaces (see the Spark sketch after this list).
- Efficiency: Internal tiering mechanisms automate resource optimization for information lifecycle management (ILM), which helps make next-generation object stores more cost-effective than public clouds.
- Multi-tenancy: A single object store can serve disparate workloads, for example supporting ML workloads alongside purely capacity-driven applications with lower performance requirements (such as backup or archiving).
- Multi-cloud integration: Modern object stores can leverage public cloud resources and form an active part of a broad hybrid cloud strategy.
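To make the “smart” bullet above more concrete, here is a minimal sketch of the event-notification pattern: it registers a serverless function to be invoked whenever a new object lands in a bucket, so enrichment and indexing can happen during ingest rather than in a separate batch pass. The endpoint, bucket, and function ARN are hypothetical, and the sketch assumes an object store that implements the standard S3 notification API.

```python
import boto3

# Assumption: an S3-compatible object store that supports bucket notifications.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Trigger a (hypothetical) metadata-augmentation function on every new object.
s3.put_bucket_notification_configuration(
    Bucket="ml-training-data",  # illustrative bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                # Hypothetical function that extracts and augments metadata on ingest.
                "LambdaFunctionArn": "arn:aws:lambda:eu-west-1:123456789012:function:augment-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```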
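And because these systems expose Amazon S3 interfaces, analytics engines can read from them in place. The PySpark sketch below assumes the Hadoop s3a connector is on the classpath and uses illustrative endpoint, credential, path, and column names; it shows the access pattern rather than any specific product’s configuration.

```python
from pyspark.sql import SparkSession

# Point the s3a connector at an S3-compatible object store (illustrative values).
spark = (
    SparkSession.builder.appName("object-store-analytics")
    .config("spark.hadoop.fs.s3a.endpoint", "https://objectstore.example.com")
    .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Read training data straight from the object store; no copy to local disk needed.
# Assumes the JSON records carry a device_id field (illustrative schema).
df = spark.read.json("s3a://ml-training-data/sensors/2020/07/")
df.groupBy("device_id").count().show()
```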
Conclusion
The new generation of object stores goes a long way toward resolving the challenges that AI and ML pose to data infrastructure.
Object storage now offers much more than it did in the past. It can offload several tasks from the rest of the infrastructure, it is faster, and it can form the data foundation layer for today’s capacity needs and tomorrow’s next-generation, cloud-native applications. Finally, next-generation object stores make it easier to implement new initiatives based on ML and AI workloads: they allow for a quick start, with the potential to grow and evolve the infrastructure as the business requires.
from Gigaom https://gigaom.com/2020/07/21/the-evolution-of-ml-infrastructure/