Storage Developer Conference
12 subscribers
Checked 4y ago
Added eight years ago
Content provided by SNIA Technical Council. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by SNIA Technical Council or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
#115: Accelerating RocksDB with Eideticom’s NoLoad NVMe-based Computational Storage Processor
RocksDB, a high-performance key-value database developed by Facebook, has proven effective at exploiting the high data speeds made possible by Solid State Drives (SSDs). By leveraging the NVMe standard, Eideticom’s NoLoad® presents FPGA computational storage processors as NVMe namespaces to the operating system and enables efficient data transfer between the NoLoad® Computational Storage Processors (CSPs), host memory, and other NVMe/PCIe devices in the system. Presenting Computational Storage Processors as NVMe namespaces has the significant benefit of requiring minimal software effort to integrate computational resources. In this presentation we use Eideticom’s NoLoad® to speed up RocksDB. Compared to software compaction running on a Dell PowerEdge R7425 server, our NoLoad®, running on Xilinx’s Alveo U280, delivered a 6x improvement in database transactions and a 2.5x reduction in CPU usage while reducing worst-case latency by 2.7x. Learning Objectives: 1) Computational storage with NVMe; 2) Presenting computational storage processors as NVMe namespaces; 3) Accelerating database access with NVMe computational storage processors.
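
Because a NoLoad® CSP is presented to the host as an ordinary NVMe namespace, discovering it requires no special driver support. The sketch below (a minimal illustration, assuming a Linux host with the standard nvme driver and sysfs; it is not Eideticom software) simply walks /sys/class/nvme to list controllers and their namespaces, which is how a computational namespace would show up alongside conventional block namespaces.

    # List NVMe controllers and their namespaces via sysfs (Linux).
    # A computational storage namespace appears here like any other namespace.
    import os

    SYS_NVME = "/sys/class/nvme"

    def read_attr(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "?"

    for ctrl in sorted(os.listdir(SYS_NVME)):               # e.g. nvme0, nvme1
        ctrl_path = os.path.join(SYS_NVME, ctrl)
        print(f"{ctrl}: {read_attr(os.path.join(ctrl_path, 'model'))}")
        for entry in sorted(os.listdir(ctrl_path)):
            if entry.startswith(ctrl + "n"):                # namespaces, e.g. nvme0n1
                size = read_attr(os.path.join(ctrl_path, entry, "size"))
                print(f"  {entry}: {size} 512-byte sectors")
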
146 episodes
All episodes

Compute Express Link™ (CXL™) is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. Datacenter architectures are evolving to support the workloads of emerging applications in Artificial Intelligence and Machine Learning that require a high-speed, low-latency, cache-coherent interconnect. The CXL specification delivers breakthrough performance while leveraging PCI Express® technology to support rapid adoption. It addresses resource sharing and cache coherency to improve performance, reduce software stack complexity, and lower overall system costs, allowing users to focus on target workloads. Attendees will learn how CXL technology maintains a unified, coherent memory space between the CPU (host processor) and CXL devices, allowing a device to expose its memory as coherent in the platform and to directly cache coherent memory. This allows both the CPU and the device to share resources for higher performance and reduced software stack complexity. In CXL, the CPU host is primarily responsible for coherency management, abstracting peer device caches and CPU caches. The resulting simplified coherence model reduces the device cost, complexity, and overhead traditionally associated with coherency across an I/O link. Learning Objectives: 1) Learn how CXL supports dynamic multiplexing between a rich set of protocols that includes I/O (CXL.io, based on PCIe®), caching (CXL.cache), and memory (CXL.mem) semantics; 2) Understand how CXL maintains a unified, coherent memory space between the CPU and any memory on the attached CXL device; 3) Gain insight into the features introduced in the CXL specification…
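
For readers who want to see what the host side of this looks like today, the short sketch below lists whatever CXL devices the Linux CXL core has registered. It is a minimal probe, assuming a recent kernel with the cxl driver; the sysfs bus path is standard, but the attributes printed from each device's uevent file will vary by device type.

    # Enumerate devices registered on the Linux CXL bus (memdevs, ports, decoders).
    # Assumes a recent kernel with the cxl driver; prints each device's uevent info.
    import os

    CXL_BUS = "/sys/bus/cxl/devices"

    if not os.path.isdir(CXL_BUS):
        print("No CXL bus registered on this host")
    else:
        for dev in sorted(os.listdir(CXL_BUS)):
            with open(os.path.join(CXL_BUS, dev, "uevent")) as f:
                details = " ".join(line.strip() for line in f)
            print(f"{dev}: {details}")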

#145: The Future of Accessing Files Remotely from Linux: SMB3.1.1 Client Status Update (45:14)
Improvements to the SMB3.1.1 client on Linux have continued at a rapid pace over the past year. These allow Linux to better access Samba servers, as well as the Cloud (Azure), NAS appliances, Windows systems, Macs, and an ever-increasing number of embedded Linux devices, including those using the new SMB3 kernel server for Linux (ksmbd). The SMB3.1.1 client for Linux (cifs.ko) continues to be one of the most actively developed file systems on Linux, and these improvements have made it possible to run additional workloads remotely. The exciting recent addition of the new kernel server also allows more rapid development and testing of optimizations for Linux. Over the past year, performance has dramatically improved with features like multichannel (allowing better parallelization of I/O and utilization of multiple network devices simultaneously), much faster encryption and signing, better use of compounding, and improved support for RDMA. Security has improved, and alternative security models are now possible with the addition of modefromsid and idsfromsid, along with better integration with Kerberos security tooling. New features added include the ability to swap over SMB3 and boot over SMB3. Quality continues to improve with more work on 'xfstests' and test automation, and tooling (cifs-utils) continues to be extended to make SMB3.1.1 mounts easier to use. This presentation will describe and demonstrate the progress that has been made over the past year in the Linux kernel client in accessing servers using the SMB3.1.1 family of protocols. In addition, recommendations on common configuration choices and troubleshooting techniques will be discussed. Learning Objectives: 1) What new features are now possible when accessing servers from Linux? 2) What new tools have been added to make it easier to use SMB3.1.1 mounts from Linux? 3) What new features are nearing completion that you should expect to see in the near future? 4) How can I configure the security settings I need to use SMB3.1.1 for my workload? 5) How can I configure the client for optimal performance for my workload?…
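
To make a few of the feature names above concrete, the sketch below assembles a cifs.ko mount command that enables several of the capabilities mentioned (the SMB3.1.1 dialect, multichannel, encryption via seal, and the modefromsid security model). It is a minimal illustration rather than a recommendation: the server, share, mount point, and credentials file are placeholders, and the available options should be checked against mount.cifs(8) for the kernel in use.

    # Build (but do not blindly run) a cifs.ko mount command exercising SMB3.1.1 features.
    # //server/share, /mnt/share and the credentials path are placeholders.
    import shlex
    import subprocess

    options = ",".join([
        "vers=3.1.1",                  # force the SMB3.1.1 dialect
        "multichannel",                # parallelize I/O across connections/NICs
        "seal",                        # encrypt data on the wire
        "modefromsid",                 # derive mode bits from a special SID
        "credentials=/etc/cifs-creds",
    ])
    cmd = ["mount", "-t", "cifs", "//server/share", "/mnt/share", "-o", options]
    print("Would run:", shlex.join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually mount (needs root)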

The NVMe Key Value (NVMe-KV) Command Set has been standardized as one of the new I/O Command Sets that NVMe supports. Additionally, SNIA has standardized a Key Value API that works with NVMe-KV and allows access to data on a storage device using a key rather than a block address. The NVMe-KV Command Set provides a key to store a corresponding value on non-volatile media, then retrieves that value from the media by specifying the corresponding key. Key Value allows users to access key-value data without the costly and time-consuming overhead of additional translation tables between keys and logical blocks. This presentation will discuss the benefits of Key Value storage, present the major features of the NVMe-KV Command Set and how it interacts with the NVMe standards, and present open source work that is available to take advantage of Key Value storage. Learning Objectives: 1) Present the standardization of the SNIA KV API; 2) Present the standardization of the NVMe Key Value Command Set; 3) Present the benefits of Key Value in computational storage; 4) Present open source work on Key Value storage.…
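
The class and method names below are not the SNIA KV API or the NVMe-KV command encoding; they are a toy in-memory stand-in that illustrates the store/retrieve/delete/exist style of interface a key-value command set exposes, in contrast to addressing logical blocks.

    # Toy model of a key-value storage interface: data is addressed by key,
    # so the host needs no key-to-logical-block translation table.
    class ToyKVStore:
        def __init__(self, max_key_len=16):
            self.max_key_len = max_key_len
            self._media = {}                     # stand-in for non-volatile media

        def store(self, key: bytes, value: bytes) -> None:
            if len(key) > self.max_key_len:
                raise ValueError("key too long for this device")
            self._media[key] = value

        def retrieve(self, key: bytes) -> bytes:
            return self._media[key]

        def delete(self, key: bytes) -> None:
            self._media.pop(key, None)

        def exist(self, key: bytes) -> bool:
            return key in self._media

    kv = ToyKVStore()
    kv.store(b"user:42", b'{"name": "example"}')
    print(kv.retrieve(b"user:42"))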

#143: Deep Compression at Inline Speed for All-Flash Array (35:18)
The rapid improvement in overall $/GByte has driven high-performance All-Flash Arrays to be increasingly adopted in both enterprises and cloud datacenters. Besides the raw NAND density scaling that comes with continued semiconductor process improvement, data reduction techniques have played, and will continue to play, a crucial role in further reducing the overall effective cost of All-Flash Arrays. One of the key data reduction techniques is compression. Compression can be performed both inline and offline. In fact, the best All-Flash Arrays often do both: fast inline compression at a lower compression ratio, and slower, opportunistic offline deep compression at a significantly higher compression ratio. However, with the rapid growth of both capacity and sustained throughput due to the consolidation of workloads on a shared All-Flash Array platform, a growing percentage of the data never gets the opportunity for deep compression. There is a deceptively simple solution: inline deep compression, with the additional benefits of reduced flash wear and networking load. The challenge, however, is the prohibitive amount of CPU cycles required. Deep compression often requires 10x or more the CPU cycles of typical fast inline compression. Even worse, the challenge will continue to grow: CPU performance scaling has slowed down significantly (breakdown of Dennard scaling), but the performance of All-Flash Arrays has been growing at a far greater pace. In this talk, I will explain how we can meet this challenge with a domain-specific hardware design. The hardware platform is a programmable FPGA-based PCIe card. It can sustain 5+ GByte/s of deep compression throughput with low latency even for small data block sizes, by exploiting the very high internal bandwidth (TByte/s) and low latency (less than 10 ns) and the almost unlimited parallelism available on a modern mid-range FPGA device. The hardware compression algorithm is trained with a vast amount of data available to our systems. Our benchmarks show it can match or outperform some of the best software compressors available in the market without taxing the CPU. Learning Objectives: 1) Hardware architecture for inline deep compression; 2) Design of a hardware deep compression engine; 3) Inline and offline compression for All-Flash Arrays.…
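
As a rough feel for the inline-versus-deep tradeoff described above, the sketch below compresses the same buffer with zlib at a fast level and at the deepest level and reports ratio and time. It is only an analogy using commodity software (zlib is not the hardware algorithm discussed in the talk), and the sample data is synthetic, so the absolute numbers mean little; the point is the ratio-versus-CPU-time tradeoff that motivates offloading deep compression to dedicated hardware.

    # Compare a fast "inline-style" compression level with a deep one on the same data.
    import time
    import zlib

    # Synthetic, mildly compressible sample data.
    data = (b"storage developer conference " * 2000) + bytes(range(256)) * 500

    for level in (1, 9):                          # 1 ~ fast inline, 9 ~ deep/offline
        start = time.perf_counter()
        out = zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        print(f"level {level}: ratio {len(data) / len(out):.2f}x, "
              f"{elapsed * 1000:.2f} ms")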

#142: ZNS: Enabling in-place Updates and Transparent High Queue-Depths (45:23)
Zoned Namespaces represent the first step towards the standardization of Open-Channel SSD concepts in NVMe. Specifically, ZNS brings the ability to implement data placement policies in the host, thus providing a mechanism to (i) lower the write-amplification factor (WAF), (ii) lower NAND over-provisioning, and (iii) tighten tail latencies. Initial ZNS architectures envisioned large zones targeting archival use cases. This motivated the creation of the “Append Command” - a specialization of nameless writes that allows the device I/O queue depth to be increased beyond the initial limitation imposed by the zone write pointer. While this is an elegant solution, backed by academic research, the changes required in file systems and applications are making adoption more difficult. As an alternative, we have proposed exposing a per-zone random write window that allows out-of-order writes around the existing write pointer. This solution brings two benefits over the “Append Command”: First, it allows I/Os to arrive out of order without any host software changes. Second, it allows in-place updates within the window, which enables existing log-structured file systems and applications to retain their metadata model without incurring a WAF penalty. In this talk, we will cover in detail the concept of the random write window, the use cases it addresses, and the changes we have made in the Linux stack to support it. Learning Objectives: 1) Learn about the general ZNS architecture and ecosystem; 2) Learn about the use cases supported in ZNS and the design decisions in the current specification with regard to in-place updates and multiple in-flight I/Os; 3) Learn about new features being brought to NVMe to support in-place updates and transparent high queue depths.…
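
The toy model below is not the NVMe ZNS command encoding; it only illustrates why strictly sequential writes at the zone write pointer limit a host to one outstanding write per zone, and how a per-zone random write window relaxes that by accepting out-of-order writes that land within the window while the pointer advances over contiguously filled blocks.

    # Conceptual model of a zone write pointer plus a random write window.
    class Zone:
        def __init__(self, size_blocks, window):
            self.size = size_blocks
            self.window = window
            self.wp = 0                          # zone write pointer
            self.filled = set()                  # blocks written ahead of the pointer

        def write(self, lba, nblocks=1):
            if not (self.wp <= lba and lba + nblocks <= self.wp + self.window):
                raise IOError(f"write at {lba} is outside the random write window "
                              f"[{self.wp}, {self.wp + self.window})")
            self.filled.update(range(lba, lba + nblocks))
            while self.wp in self.filled:        # advance over contiguous blocks
                self.filled.discard(self.wp)
                self.wp += 1

    zone = Zone(size_blocks=1 << 20, window=64)
    zone.write(4)        # arrives out of order: accepted, pointer stays at 0
    zone.write(0, 4)     # fills the gap: pointer jumps past block 4
    print("write pointer is now", zone.wp)       # -> 5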

#141: Unlocking the New Performance and QoS Capabilities of the Software-Enabled Flash API (51:12)
The Software-Enabled Flash API gives unprecedented control to application architects and developers to redefine the way they use flash for their hyperscale applications, by fundamentally redefining the relationship between the host and solid-state storage. Dive deep into new Software-Enabled Flash concepts such as virtual devices, Quality of Service (QoS) domains, Weighted Fair Queueing (WFQ), Nameless Writes and Copies, and controller offload mechanisms. This talk by KIOXIA (formerly Toshiba Memory) will include real-world examples of using the new API to define QoS and latency guarantees, isolate workloads, minimize write amplification through application-driven data placement, and achieve higher performance with customized flash translation layers (FTLs). Learning Objectives: 1) Provide an in-depth dive into using the Software-Enabled Flash API; 2) Map application workloads to Software-Enabled Flash structures; 3) Understand how to implement QoS requirements using the API.…
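
The names and fields below are not the Software-Enabled Flash API; they are a deliberately tiny illustration of the layering the abstract describes, in which a drive is partitioned into virtual devices for hardware isolation and each virtual device hosts QoS domains that applications place data into, with weights feeding a fair-queueing scheduler.

    # Toy model of the layering described above: drive -> virtual devices -> QoS domains.
    from dataclasses import dataclass, field

    @dataclass
    class QoSDomain:
        name: str
        weight: int                              # share used by weighted fair queueing
        placed_bytes: int = 0

        def place(self, nbytes: int):
            self.placed_bytes += nbytes          # application-driven data placement

    @dataclass
    class VirtualDevice:
        name: str
        dies: int                                # flash dies dedicated to this workload
        domains: list = field(default_factory=list)

    drive = [VirtualDevice("latency-critical", dies=8),
             VirtualDevice("bulk-ingest", dies=24)]
    drive[0].domains.append(QoSDomain("db-log", weight=80))
    drive[1].domains.append(QoSDomain("object-store", weight=20))
    drive[0].domains[0].place(4096)

    for vdev in drive:
        for dom in vdev.domains:
            print(f"{vdev.name}/{dom.name}: weight={dom.weight}, "
                  f"placed={dom.placed_bytes} bytes")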

The NVM Express workgroup introduces new features frequently, and the Linux kernel support for these devices evolves with it. These ever-moving targets create challenges for tool developers when new interfaces are created or older ones change. This talk will provide information on some of these recent features and enhancements, and introduce the open source 'libnvme' project: a library, available in public git repositories, that provides access to all NVM Express features with convenient abstractions over the kernel interfaces used to interact with your devices. The session will demonstrate integrating the library with other programs, and also provide an opportunity for the audience to share what additional features they would like to see from this common library in the future. Learning Objectives: 1) Explain protocol and host operating system interaction complexities; 2) Introduce libnvme and how it manages those relationships; 3) Demonstrate integration with applications.…
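
libnvme itself is a C library; as a rough host-side stand-in for the kind of enumeration it abstracts, the sketch below shells out to nvme-cli's JSON listing and prints what it finds. The JSON schema varies across nvme-cli versions, so the 'Devices', 'DevicePath', and 'ModelNumber' keys used here are assumptions to adapt to your installed version.

    # Enumerate NVMe devices via nvme-cli's JSON output (a stand-in for what
    # libnvme exposes programmatically in C). Field names differ across
    # nvme-cli versions; the keys below are assumptions.
    import json
    import subprocess

    out = subprocess.run(["nvme", "list", "-o", "json"],
                         capture_output=True, text=True, check=True)
    listing = json.loads(out.stdout)

    for dev in listing.get("Devices", []):
        print(dev.get("DevicePath", "?"), "-", dev.get("ModelNumber", "?").strip())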

#139: Use Cases for NVMe-oF for Deep Learning Workloads and HCI Pooling (58:29)
The efficiency, performance, and choice in NVMe-oF are enabling some unique and interesting use cases, from AI/ML to hyperconverged infrastructures. Artificial Intelligence workloads process massive amounts of data from structured and unstructured sources. Today most deep learning architectures rely on local NVMe to serve up tagged and untagged datasets into map-reduce systems and neural networks for correlation. NVMe-oF for deep learning infrastructures brings a shared data model to ML/DL pipelines without sacrificing overall performance and training times. NVMe-oF also enables HCI deployments to scale without adding more compute, allowing end customers to reduce dark flash and reduce cost. The talk explores these and several innovative technologies driving the next storage connectivity revolution. Learning Objectives: 1) Storage architectures for deep learning workloads; 2) Extending the reach of HCI platforms using NVMe-oF; 3) Ethernet Bunch of Flash architectures.…
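
For readers new to NVMe-oF, attaching a remote subsystem so that it appears as local /dev/nvme block devices is a two-step operation with nvme-cli, sketched below. The transport, address, port, and NQN are placeholders, not real endpoints; a training pipeline would then read its datasets from the resulting namespaces just as it would from local NVMe.

    # Sketch: discover and connect to an NVMe-oF (TCP) subsystem with nvme-cli.
    # The target address, port and NQN are placeholders.
    import subprocess

    target = {"transport": "tcp", "addr": "192.0.2.10", "svcid": "4420",
              "nqn": "nqn.2019-08.example:dl-dataset-pool"}

    # Ask the target which subsystems it exports.
    subprocess.run(["nvme", "discover", "-t", target["transport"],
                    "-a", target["addr"], "-s", target["svcid"]], check=True)

    # Attach one subsystem; its namespaces then appear as local /dev/nvmeXnY devices.
    subprocess.run(["nvme", "connect", "-t", target["transport"],
                    "-a", target["addr"], "-s", target["svcid"],
                    "-n", target["nqn"]], check=True)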

NVMe is the fastest growing storage technology of the last decade and has succeeded in unifying client, hyperscale, and enterprise applications into a common storage framework. NVMe has evolved from being a disruptive technology to becoming a core element in storage architectures. In this session, we will talk about the NVMe transition to a merged base specification inclusive of both NVMe and NVMe-oF architectures. We will provide an overview of the latest NVMe technologies, summarize the NVMe standards roadmap, and describe the latest NVMe standardization initiatives. We will also present a number of areas of NVMe innovation that preserve the simple, fast, scalable paradigm while extending the broad appeal of the NVMe architecture. These continued innovations will ready the NVMe technology ecosystem for yet another period of growth and expansion. Learning Objectives: 1) Learn about the NVMe transition to a merged base specification inclusive of both NVMe and NVMe-oF architectures; 2) Receive a summary of the NVMe standards roadmap; 3) Understand the latest NVMe standardization initiatives.…

#137: Caching on PMEM: an Iterative Approach (43:29)
With PMEM boasting much higher density and DRAM-like performance, applying it to in-memory caching such as memcached seems like an obvious thing to try. Nonetheless, there are questions when it comes to new technology. Would it work for our use cases, in our environment? How much effort does it take to find out if it works? How do we capture the most value with a reasonable investment of resources? How can we continue to find a path forward as we make discoveries? At Twitter, we took an iterative approach to exploring cache on PMEM. With significant early help from Intel, we started with simple tests in Memory Mode in a lab environment, and moved on to App Direct mode with modifications to Pelikan (pelikan.io), a modular open-source cache backend developed by Twitter. With positive results from the lab runs, we moved the evaluation to platforms that more closely represent Twitter’s production environment, and uncovered interesting differences. With a better understanding of how Twitter’s cache workload behaves on the new hardware, and our insight into Twitter’s cache workload in general, we are proposing a new cache storage design called Segcache that, among other things, offers flexibility with storage media and in particular is designed with PMEM in mind. As a result, it achieves superior performance and effectiveness when running on either DRAM or PMEM. The whole exploration was made easier by the modular architecture of Pelikan, and we added a benchmark framework to support the evaluation of storage modules in isolation, which also greatly facilitated our exploration and development. Learning Objectives: 1) Demonstrate the feasibility of using PMEM for caching and meeting production requirements; 2) Provide a case study of how software companies can approach and adopt new technology like PMEM iteratively; 3) Provide observations and suggestions on how to promote a more integrated hardware/software design cycle.…
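
App Direct mode ultimately comes down to mapping persistent memory into the process address space instead of going through the page cache. The sketch below shows the general shape of that: it memory-maps a file assumed to live on a DAX-mounted filesystem (the /mnt/pmem path is a placeholder). Real PM-aware cache backends would use libpmem-style flush primitives for persistence ordering rather than the plain msync shown here.

    # Map a file on a (presumed) DAX filesystem and use it as byte-addressable
    # cache storage. /mnt/pmem/cache.seg is a placeholder path.
    import mmap
    import os

    PATH = "/mnt/pmem/cache.seg"
    SIZE = 64 * 1024 * 1024                      # one 64 MiB cache segment

    fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, SIZE)

    buf = mmap.mmap(fd, SIZE)                    # loads/stores go straight to the mapping
    buf[0:5] = b"hello"                          # write a value in place
    buf.flush()                                  # msync; PM-aware code would use CLWB/fence
    print(bytes(buf[0:5]))
    buf.close()
    os.close(fd)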

Software-based memory-to-memory data movement is common, but takes valuable cycles away from application performance. At the same time, offload DMA engines are vendor-specific and may lack capabilities around virtualization and user-space access. This talk will focus on how SDXI (Smart Data Acceleration Interface), a newly formed SNIA TWG, is working to bring an extensible, virtualizable, forward-compatible, memory-to-memory data movement and acceleration interface specification. As new memory technologies get adopted and memory fabrics expand the use of tiered memory, data mover acceleration and its uses will increase. This TWG will encourage adoption of and extensions to this data mover interface. Learning Objectives: 1) A new proposed standard for a memory-to-memory data movement interface; 2) A new TWG to develop this standard; 3) Use cases where this will apply to evolving storage architectures with memory pooling and persistent memory.…

#135: SmartNICs and SmartSSDs, the Future of Smart Acceleration (50:50)
Since the advent of the smartphone over a decade ago, we've seen several new "Smart" technologies, but few have had a significant impact on the data center until now. SmartNICs and SmartSSDs will change the landscape of the data center, but what comes next? This talk will summarize the state of the SmartNIC market by classifying and discussing the technologies behind the leading products in the space. Then it will dive into the emerging technology of SmartSSDs and how they will change the face of storage and solutions. Finally, we'll dive headfirst into the impact of PCIe 5 and Compute Express Link (CXL) on the future of Smart Acceleration in solution delivery. Learning Objectives: 1) Understand the current state of the SmartNIC market and leading products; 2) Introduce the concept of SmartSSDs and two products available today; 3) Discuss the future of Device-to-Device (D2D) communications using PCIe and CXL/CCIX; 4) Lay out a vision for composable solutions in which multiple devices on a PCIe bus communicate directly.…

#134: Best Practices for OpenZFS L2ARC in the Era of NVMe (53:47)
The ZFS L2ARC is now more than 10 years old. Over that time, a lot of secret incantations and tribal knowledge have been created by users, testers, developers, and the odd sales or marketing person. That collection of community wisdom informs the use and/or tuning of ZFS L2ARC for certain I/O profiles, dataset sizes, server classes, share protocols, and device types. In this talk, we will review a case study in which we tested a few of these L2ARC myths on an NVMe-capable OpenZFS storage appliance. Can high-speed NVMe flash devices keep L2ARC relevant in the face of ever-increasing memory capacity for ARC (the primary cache) and all-flash storage pools? Learning Objectives: 1) Overview of ZFS L2ARC design goals and high-level implementation details that pertain to our findings; 2) Performance characteristics of L2ARC during warming and when warmed, plus any tradeoffs or pitfalls with L2ARC in these states; 3) How to leverage NVMe as L2ARC devices to improve performance in a few storage use cases.…
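
For readers who have not configured L2ARC before, adding an NVMe namespace as a cache vdev is a one-line operation; the sketch below wraps the relevant zpool commands and then samples pool I/O statistics so cache warming can be observed. The pool name 'tank' and the device path are placeholders.

    # Attach an NVMe device as an L2ARC cache vdev and watch it warm up.
    # 'tank' and /dev/nvme0n1 are placeholders for a real pool and device.
    import subprocess
    import time

    POOL, CACHE_DEV = "tank", "/dev/nvme0n1"

    subprocess.run(["zpool", "add", POOL, "cache", CACHE_DEV], check=True)

    # Sample pool I/O statistics (including the cache vdev) a few times.
    for _ in range(3):
        subprocess.run(["zpool", "iostat", "-v", POOL], check=True)
        time.sleep(10)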

#133: NVMe-based Video and Storage Solutions for Edge-based Computational Storage (40:58)
5G wireless technology will bring vastly superior data rates to the edge of the network. However, with this increase in bandwidth will come applications that significantly increase overall network throughput. Video applications will likely explode as end users have large amounts of data bandwidth to work with. Video will not only require advanced compression but will also require large amounts of data storage. Combining advanced compression technologies with storage will allow a high density of storage and compression in a small amount of rack space with little power, ideal for placement at the edge of the network. An NVMe-based module provides the opportunity to use computational storage elements to enable edge compute and video compression. This presentation will provide technical details and various options to combine video and storage on an NVMe interface. Further, it will explore how this NVMe device can be virtualized for both storage and video in an edge compute environment. Learning Objectives: 1) Understand how NVMe can be used for both video and storage; 2) Understand how computational storage can be virtualized using NVMe; 3) Understand why combinational element modules such as video storage will become important after deployment of 5G networks.…

#132: Emerging Scalable Storage Management Functionality (38:53)
By now, you have a good understanding of SNIA Swordfish™ and how it extends the DMTF Redfish® specification to manage storage equipment and services. Attend this presentation to learn what’s new and how the specification has evolved since last year. The speaker will share the latest updates, ranging from details of features and profiles to new vendor-requested functionality that extends the specification’s support from direct-attached storage to NVMe. You won’t want to miss this opportunity to be brought up to speed. Learning Objectives: 1) Educate the audience on what’s new with Swordfish; 2) Describe features and profiles and why they are useful; 3) Provide an overview of vendor-requested Swordfish functionality.…

DMTF's Redfish® is a standard API designed to deliver simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Both human readable and machine capable, Redfish leverages common Internet and web services standards to expose information directly to the modern tool chain. This presentation will provide an overview of Redfish, what’s new in the Redfish ecosystem, as well as adoption in the broader standards community. You’ll also learn more about the general Redfish data model, including the base storage models and infrastructure that are used by SNIA Swordfish extensions. Learning Objectives: 1) Introduce the DMTF Redfish API; 2) Provide an update on the latest Redfish developments; 3) Understand how SNIA Swordfish builds on Redfish.…
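
Because Redfish (and the Swordfish extensions) are plain HTTPS plus JSON, a first exploration needs nothing more than an HTTP client. The sketch below reads a service root and follows the Systems collection it advertises; the endpoint address and credentials are placeholders, and verify=False is only there to tolerate a lab service's self-signed certificate.

    # Walk a Redfish service root and list the systems it advertises.
    # The management endpoint and credentials are placeholders.
    import requests

    BASE = "https://bmc.example.com"
    AUTH = ("admin", "password")

    root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
    print("Service:", root.get("Name"), "Redfish", root.get("RedfishVersion"))

    # Follow the Systems collection advertised by the service root.
    systems_url = root["Systems"]["@odata.id"]
    systems = requests.get(f"{BASE}{systems_url}", auth=AUTH, verify=False).json()
    for member in systems.get("Members", []):
        print("System:", member["@odata.id"])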

#130: SNIA Nonvolatile Memory Programming TWG (52:52)
The SNIA NVMP TWG continues to make significant progress on defining the architecture for interfacing applications to persistent memory (PM). In this talk, we will focus on the important Remote Persistent Memory scenario and how the NVMP TWG’s programming model applies. Application use of these interfaces, along with fabric support such as RDMA and platform extensions, is part of this, and the talk will describe how the larger ecosystem fits together to support PM as low-latency remote storage.…

#129: So, You Want to Build a Storage Performance Testing Lab? (55:16)
Whether you are a storage vendor, consumer, or developer, the performance of storage solutions affects you. Assessing the performance of large and complex storage solutions requires some level of performance testing lab, and there are many factors to consider. From network topology to load generator CPU, all components must be selected and configured with care to avoid unintended bottlenecks. In this session, we will review a few best practices and lessons learned, including whether virtual clients are feasible (and my experiences attempting performance testing on several different hypervisors), best practices for network configuration, and how to use maximum effective data rates to avoid unintended bottlenecks. Finally, we will conclude with a review of data comparing different physical load generating hardware and its effect on measured performance. Learning Objectives: 1) Effect of load generating client hardware on measured performance; 2) Avoiding unintended bottlenecks by using interconnect maximum effective data rates; 3) Best practices for configuring a performance lab network and load generators.…
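
A quick arithmetic check of maximum effective data rates often catches a lab bottleneck before any test is run: take line rate, subtract protocol overhead, and compare against the aggregate load the clients are expected to generate. The sketch below does that back-of-the-envelope math; the 5% framing/protocol overhead figure and the example link speed are rough assumptions, not measured constants.

    # Back-of-the-envelope check: can the load-generator links carry the target load?
    LINK_GBPS = 25                   # e.g. one 25GbE NIC per load generator
    PROTOCOL_OVERHEAD = 0.05         # Ethernet/IP/TCP framing, rough assumption
    NUM_CLIENTS = 8
    PER_CLIENT_TARGET_MBPS = 2_000   # desired per-client throughput, MB/s

    effective_mbps = LINK_GBPS / 8 * 1000 * (1 - PROTOCOL_OVERHEAD)   # per NIC, MB/s
    aggregate_demand = NUM_CLIENTS * PER_CLIENT_TARGET_MBPS

    print(f"Effective per-link rate : {effective_mbps:,.0f} MB/s")
    print(f"Aggregate client demand : {aggregate_demand:,.0f} MB/s")
    if PER_CLIENT_TARGET_MBPS > effective_mbps:
        print("Client NICs would bottleneck the test, not the storage under test.")
    else:
        print("Client links have headroom; look for bottlenecks elsewhere.")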

SMB 3.1.1 is the state of the art for secure remote file access, but deploying it for clouds and mobile users can be very challenging; TCP/445 is often blocked, networks are often slow, and edge file servers are often feared. The Microsoft SMB3 team has now built the first implementation of SMB3 over QUIC, a UDP/TLS transport pioneered by Google. This allows secure tunneling of SMB3 over internet-friendly ports. Furthermore, we have added compression for SMB3, which allows significant data savings over congested and low bandwidth networks. In this talk we’ll discuss these new options, as well as other recent security and feature capabilities nearing completion. Learning Objectives: 1) SMB3 over new transport; 2) SMB3 over wide area networks; 3) SMB3 protocol update.…

#127: Object Storage Workload Testing Tools (47:33)
Attendees of this presentation will learn how to use several open source tools (https://github.com/jharriga/) to evaluate object storage platforms. These tools provide automation and customer-based object storage workloads for activities such as filling a cluster, aging a cluster, and running steady-state mixed-operation workloads. One of the tools, RGWtest, automates pool creation, logs cluster statistics such as system resource utilization (CPU and memory), and submits workloads through COSbench, Intel’s open source object storage benchmark tool. A demonstration of the tools will be part of the presentation. Learning Objectives: 1) How to install, configure and execute the object storage workload tools; 2) How to interpret workload run results; 3) How to design and size object storage workloads.…
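
COSbench drives its workloads through S3-compatible gateways such as RGW; as a much smaller stand-in, the sketch below uses boto3 to issue a 70/30 read/write mix against a single bucket. The endpoint, credentials, bucket name, and object sizes are placeholders, the bucket is assumed to already exist, and this single-threaded loop illustrates the workload shape rather than producing benchmark numbers.

    # Tiny 70/30 read/write object workload against an S3-compatible endpoint
    # (e.g. Ceph RGW). Endpoint, credentials and bucket name are placeholders;
    # the bucket is assumed to already exist.
    import os
    import random

    import boto3

    s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:7480",
                      aws_access_key_id="ACCESS", aws_secret_access_key="SECRET")
    BUCKET, OBJECTS, SIZE = "workload-test", 100, 4 * 1024 * 1024    # 4 MiB objects

    # Fill phase: write the object population once.
    for i in range(OBJECTS):
        s3.put_object(Bucket=BUCKET, Key=f"obj-{i:05d}", Body=os.urandom(SIZE))

    # Steady-state phase: 70% reads, 30% overwrites.
    for _ in range(1000):
        key = f"obj-{random.randrange(OBJECTS):05d}"
        if random.random() < 0.7:
            s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        else:
            s3.put_object(Bucket=BUCKET, Key=key, Body=os.urandom(SIZE))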

#126: Introducing the SNIA Swordfish™ PowerShell Tool Kit and Windows Admin Center Integration (40:27)
PowerShell is a task-based command-line shell and scripting language that helps rapidly automate tasks that manage operating systems (Linux, macOS, and Windows) and processes. PowerShell is open-source, object-based and includes a rich expression parser and a fully developed scripting language with a gentle learning curve. The PowerShell Toolkit for SNIA Swordfish™ provides simple to use commands for managing any Swordfish Implementation (including the SNIA API Emulator). Attend this session to learn how to use the SNIA Swordfish PowerShell Module to jumpstart development of your own Swordfish implementation. Learning Objectives: 1) Provide an overview of the PowerShell open source tool kit; 2) Describe how the PowerShell tool kit can speed a Swordfish implementation; 3) Educate the audience on how to use and access the PowerShell tool kit.…

After a year of implementation progress on the SMB3.1.1 POSIX Extensions, a set of protocol extensions that allow optimal Linux and Unix interoperability with NAS and Cloud file servers: what is the current status? What have we learned? What has changed in the protocol specification in the past year? What advice do we have for implementers and users? These extensions greatly improve the experience for users of Linux. This presentation will review the state of the protocol extensions and their current implementation in the Linux kernel and Samba, among others, and provide an opportunity for feedback and suggestions for additions to the POSIX extensions. This has been an exciting year with many improvements to the implementations of the SMB3.1.1 POSIX Extensions in Samba and Linux! Learning Objectives: 1) What is the current status of Linux interoperability with various SMB3.1.1 servers? 2) How have the protocol extensions for Linux/POSIX progressed over the past year? What has changed? What works? 3) What are suggestions for implementers of SMB3.1.1 servers? 4) What is useful information for users to know to try these extensions? 5) How do new Linux file system features map to these extensions?…

#124: Standardization for a Key-Value Interface underway at SNIA and NVM Express (52:11)
NVMe KV (Key-Value) is an industry-wide proposal for a new command structure that allows access to data on an NVMe SSD controller using a “key” rather than a block address. Developed within the NVM Express technical working group, this Key Value command set provides a “key” to store a corresponding “value” on non-volatile media, then retrieves that “value” from the media by specifying the corresponding “key.” In addition to the extensive work being undertaken by the NVM Express working group, SNIA has completed an overarching Key Value API, released for a membership vote in January 2019. This presentation examines standardization efforts going on within SNIA and the NVM Express working group that will allow users to access key-value data without the costly and time-consuming overhead of additional translation tables between keys and logical blocks. Learning Objectives: 1) What is the status of standards development; 2) Overview of what is in the SNIA KV API; 3) Overview of what is in the NVMe KV proposal.…

A variety of persistent memory technologies with DRAM-class performance, known as “memory class storage” or “MCS”, have appeared on the horizon. MCS will change the architecture of future computing systems. These technologies include carbon nanotube memory, phase change memory, magnetic spin memory, and resistive memory, and each has unique characteristics that can complicate systems designed to exploit them. The JEDEC DDR5 NVRAM specification now in process intends to bridge the differences between the technologies and provide systems designers with a unified specification for DRAM-class persistent memory. Nantero NRAM is an NVRAM based on carbon nanotube cell structures that provides a DDR4 or DDR5 interface to the system, with additional enhancements that yield 20% higher performance at the same clock rate. Learning Objectives: 1) Attendees are exposed to system-level advantages of memory class storage devices that operate at DRAM speeds but provide data persistence; 2) JEDEC is working on a new specification to standardize the interface to a variety of NVRAMs which provide memory class storage; 3) Nantero NRAM is a memory class storage device with better-than-DRAM performance.…

#122: 10 Million I/Ops From a Single Thread (50:11)
One of the most common benchmarks in the storage industry is 4KiB random read I/O per second. Over the years, the industry first saw the publication of 1M I/Ops on a single box, then 1M I/Ops on a single thread (by SPDK). More recently, there have been publications outlining 10M I/Ops on a single box using high performance NVMe devices and more than 100 CPU cores. This talk will present a benchmark of SPDK performing more than 10 million random 4KiB read operations per second from a single thread to 20 NVMe devices, a large advance compared to the state of the art of the industry. SPDK has developed a number of novel techniques to reach this level of performance, which will be outlined in detail here. These techniques include polling, advanced MMIO doorbell batching strategies, PCIe and DDIO considerations, careful management of the CPU cache, and the use of non-temporal CPU instructions. This will be a low-level talk with real examples of eliminating data-dependent loads, profiling last-level cache misses, pre-fetching, and more. Additionally, there remain a number of techniques that have not yet been employed and that warrant future research. These techniques often push devices outside of their original intended operating mode, while remaining within the bounds of the specification, and so often require collaboration between NVMe controller and device designers, the NVMe specification body, and software developers such as the SPDK team. Learning Objectives: 1) Optimal use of NVMe devices; 2) Optimal use of PCIe and MMIO in a storage stack; 3) Leveraging advanced x86-64 CPU instructions and making best use of the CPU cache.…
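
SPDK itself is written in C; as a language-agnostic illustration of two of the ideas mentioned (a single polling thread and batched doorbell writes), the toy loop below services several simulated queue pairs from one thread, ringing one 'doorbell' per batch of submissions instead of one per I/O. The queue and doorbell behaviour are simplified stand-ins, not the NVMe register interface.

    # Toy single-thread polling loop: batch submissions, ring one "doorbell" per
    # batch, then poll completion queues. A simplified model, not real NVMe.
    from collections import deque

    class ToyQueuePair:
        def __init__(self):
            self.sq, self.cq = deque(), deque()
            self.doorbell_writes = 0

        def submit(self, cmd):
            self.sq.append(cmd)                  # queued; doorbell not yet rung

        def ring_doorbell(self):
            if not self.sq:
                return
            self.doorbell_writes += 1            # one MMIO write covers the whole batch
            while self.sq:
                self.cq.append(self.sq.popleft())    # device "completes" instantly here

        def poll_completions(self):
            done = len(self.cq)
            self.cq.clear()
            return done

    qps = [ToyQueuePair() for _ in range(4)]
    completed = 0
    for i in range(10_000):                      # 10k I/Os spread over 4 queue pairs
        qps[i % 4].submit(("read", i))
        if i % 32 == 31:                         # ring doorbells every 32 submissions
            for qp in qps:
                qp.ring_doorbell()
        completed += sum(qp.poll_completions() for qp in qps)
    for qp in qps:
        qp.ring_doorbell()
        completed += qp.poll_completions()

    print("completed:", completed,
          "doorbell writes:", sum(qp.doorbell_writes for qp in qps))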

Applications using NVMe, SAS, SATA, and USB based storage devices are finding new uses, and one of them is mining open source cryptocurrency such as Burstcoin. Using low-power or solar-powered HDDs, SSDs and, most importantly, NVMe technology can improve turnaround latency and build blocks faster. Utilization of security protocols allows anonymization as well as protection of users and vendors. Burstcoin has an extensive developer community, can run in the cloud, and has dApps, its own ATM, and more. More importantly, Burst is based on a Proof of Capacity protocol, utilizes storage drives and arrays, and enables users to build a mesh net of miners and a secure blockchain protocol. Using NVMe devices we can accelerate transactions. We will show how, using performance analytics tools, we can create predictions on building blockchain blocks and provide insights into data usage efficiency. Additional benefits are saving energy costs, addressing new markets, and creating adoption in larger markets. The usage of storage devices and blockchain will enable hardware-secured banking transactions (via smart contracts) and much more. Learning Objectives: 1) Learn how Proof of Capacity works with storage devices; 2) Find new applications for storage; 3) Understand the data science perspective on blockchain.…

#120: What Happens when Compute Meets Storage? (51:15)

#118: Linux NVMe and Block Layer Status Update (46:47)

#117: Developments in LTO Tape Hardware and Software (41:07)