Designing Shared Storage for Hadoop, Elastic, Kafka, and TensorFlow

As analytics environments like Hadoop, Elastic, Kafka, and TensorFlow continue to scale, organizations need to create a shared infrastructure that delivers the bandwidth, flexibility, and efficiency these environments demand. In a recent Storage Intensity podcast, Tom Lyon, founder and chief scientist of DriveScale, and George Crump, Lead Analyst of Storage Switzerland, sat down to discuss a wide range of subjects, including Non-Volatile Memory Express (NVMe), NVMe over Fabrics (NVMe-oF), and composable infrastructure.
It is hard to remember that for decades, whether a system was large or small, its storage was intricately and inescapably linked to its compute. Network-attached storage, as pioneered by NetApp, helped break that link between compute and storage in the enterprise at the file level. But it was the advent of storage area networks that let storage remain reasonably tightly coupled to servers and operate at the lower block level, below file systems, while allowing that storage to scale independently of the number of disks you might jam into a single box.