2021
LUG 2021 was held as a web-based event, hosted by UF Information Technology (UFIT). The complete agenda, presentations, and videos are available here.
2020
LUG 2020 was converted to a web-based event. There were two webinars held, and the complete agenda, presentations, and videos are available here.
2019
LUG 2019 was held in Houston, TX, May 14-17, 2019. The complete agenda, presentations, and videos are available here.
2018
LUG 2018 was held in Chicago, IL, April 24-26, 2018. The complete agenda, presentations, and videos are available here.
2017
LUG 2017 was held in Bloomington, IN, May 31-June 2, 2017. The complete agenda, presentations, and videos are available here.
2016
LUG 2016 was held in Portland, OR, April 5-7, 2016. The complete agenda, presentations, and videos are available here.
2015
LUG 2015 was held in Denver, CO, April 13-15, 2015. The complete agenda, presentations, and videos are available here.
2014
LUG 2014 was held in Miami, FL, April 8-10, 2014. The complete agenda, presentations, and videos are available here.
2013 and Earlier
HP-CAST
The HP Consortium for Advanced Scientific and Technical Computing (HP-CAST) was held November 15-16 in Denver, CO, immediately prior to SC13.
- OpenSFS-HP-CAST-21, Mike Vildibill, DDN
LUG 2013
Tuesday, April 16, 2013
8:00 am – 8:10 am | Welcome Remarks |
8:10 am – 8:45 am | OpenSFS Update Norman Morse & Galen Shipman, OpenSFS |
8:45 am – 9:20 am | EOFS Update Hugo Falter, EOFS |
9:20 am – 9:30 am | Xyratex Update Kevin Canady, Xyratex |
9:30 am – 10:00 am | Lustre Releases Peter Jones, Intel |
10:00 am – 10:20 am | Break |
10:20 am – 10:40 am | Layout Lock Jinshan Xiong, Intel |
10:40 am – 11:00 am | Distributed Namespace (DNE) Di Wang, Intel |
11:00 am – 11:20 am | Software vs. Hardware RAID and Implications for the Future Alan Poston, Xyratex |
11:20 am – 11:40 am | Sequoia and the ZFS OSD Christopher Morrone, Lawrence Livermore National Laboratory |
11:40 am – 12:00 pm | ZFS and Lustre go to Hollywood Josh Judd, Warp Mechanics |
12:00 pm – 1:00 pm | Lunch |
1:00 pm – 1:20 pm | Lustre Contribution Model Nathan Rutman, Xyratex |
1:20 pm – 1:40 pm | The State of the Lustre File System and The Lustre Development Ecosystem: A 2013 Report Card Dave Fellinger, DataDirect Networks |
1:40 pm – 2:00 pm | Lustre Acquisition and Its Future Peter Bojanic, Xyratex |
2:00 pm – 2:40 pm | OpenSFS CDWG Meeting Chris Morrone, OpenSFS |
2:40 pm – 3:00 pm | The Lustre Community in Europe Torben Kling Petersen, PhD, Xyratex |
3:00 pm – 3:30 pm | Break |
3:30 pm – 3:50 pm | Lustre Tuning Parameters Bobbie Lind, Intel |
3:50 pm – 4:10 pm | Hadoop-Lustre Omkar Kulkarni, Intel |
4:10 pm – 4:30 pm | An Intro to Ceph for HPC Sage Weil, Inktank |
4:30 pm – 4:50 pm | Lustre on Amazon Web Services Robert Read, Intel |
4:50 pm – 6:00 pm | Vendor Presentations: Aeon Computing, Bull, Cray, DDN, EMC, Intel, NetApp, Warp Mechanics, Xyratex |
6:00 pm | Adjourn |
Wednesday, April 17, 2013
8:00 am – 8:30 am | Lustre 2.5 and Beyond Andreas Dilger, Intel |
8:30 am – 8:50 am | DAOS Changes to Lustre Johann Lombardi, Liang Zhen, Intel |
8:50 am – 9:10 am | Online LFSCK Alexey Zhuravlev, Intel |
9:10 am – 10:00 am | OpenSFS TWG Meeting David Dillow & John Carrier, OpenSFS |
10:00 am – 10:20 am | Break |
10:20 am – 10:40 am | User-Defined Transport Protocols for Lustre Eric Kinzie (ITT Exelis) / Linden Mercer (Penn State), NRL |
10:40 am – 11:00 am | Exploring Multiple Interface Lustre Performance from a Single Client Jim Karellas and Mahmoud Hanafi, NASA Ames Research Center |
11:00 am – 11:20 am | Active-active LNET Bonding Using Multiple LNETs and Infiniband partitions Shuichi Ihara, DataDirect Networks |
11:20 am – 11:40 am | Getting the Most Out of Lustre with the TCP LND Blake Caldwell, Oak Ridge National Laboratory |
11:40 am – 12:00 pm | Wireshark Doug Oucharek, Intel |
12:00 pm – 1:00 pm | Lunch |
1:00 pm – 2:00 pm | Lustre in Commercial Space Panel |
2:00 pm – 2:20 pm | Lustre Static Code Analysis with Coverity Sebastien Buisson, Bull |
2:20 pm – 2:40 pm | Lustre Test and Validation Toolset Chris Gearing, Intel |
2:40 pm – 3:00 pm | Lustre Manual Richard Henwood, Intel |
3:00 pm – 3:30 pm | Break |
3:30 pm – 3:50 pm | Overcoming Gemini LNET Performance Hurdles David Dillow and James Simmons, Oak Ridge National Laboratory |
3:50 pm – 4:10 pm | Performance & Functionality Testbed for Clustered Filesystems: Lustre and some of its friends Giuseppe Bruno, Bank of Italy |
4:10 pm – 4:30 pm | A New Metric for File System Load Andrew Uselton, Lawrence Berkeley National Laboratory |
4:30 pm – 4:50 pm | HPCS Scenarios Update John Carrier, Cray |
4:50 pm – 5:50 pm | OpenSFS BWG Meeting Sarp Oral, OpenSFS |
5:50 pm | Adjourn |
Thursday, April 18, 2013
8:00 am – 8:20 am | The next-generation 1 TB/s Spider file system at OLCF Sarp Oral, Oak Ridge National Laboratory |
8:20 am – 8:40 am | High Availability in Lustre – Enterprise RAS John Fragalla, Xyratex |
8:40 am – 9:00 am | Managing and Monitoring a Scalable Lustre Infrastructure Makia Minich, Xyratex |
9:00 am – 9:20 am | Robinhood Policy Engine Aurelien Degremont, CEA |
9:20 am – 9:40 am | Sequoia Data Migration Experiences Marc Stearman, Lawrence Livermore National Laboratory |
9:40 am – 10:00 am | Lustre HSM & Cloud: An Open Discussion on the Future of Lustre HSM, Storage Tiering, File/Block/Object Backing Stores & Lustre File Geo-Distribution Ashley Pittman and Dan Maslowski, DataDirect Networks |
10:00 am – 10:30 am | Break |
10:30 am – 11:00 am | OpenSFS General Assembly Terri Quinn & Tommy Minyard, OpenSFS |
11:00 am – 11:20 am | Fujitsu Contributions to Lustre Shinji Sumimoto, Oleg Drokin, Fujitsu/Intel |
11:20 am – 11:40 am | Using Changelogs for Efficient Search and Content Discovery Ashley Pittman, DataDirect Networks |
11:40 am – 12:00 pm | A Need for a DNS-like feature for LNet NIDs Doug Oucharek, Intel |
12:00 pm – 1:00 pm | Lunch |
1:00 pm – 1:30 pm | Next Generation Storage Architectures for Exascale Mark Seager, Intel |
1:30 pm – 2:00 pm | EIOW – a Framework for Exa-Scale I/O Meghan McClelland, Xyratex |
2:00 pm – 2:30 pm | Performance Evaluation of FEFS on K Computer and Fujitsu’s Roadmap toward Lustre 2.x Shinji Sumimoto, Fujitsu |
2:30 pm – 3:00 pm | Lustre – Fast Forward to Exascale Eric Barton, Intel |
3:00 pm – 3:30 pm | Break |
3:30 pm – 4:30 pm | Lustre and Beyond Panel |
4:30 pm – 4:50 pm | Lustre at FNAL Alex Kulyavtsev, Fermilab |
4:50 pm – 5:10 pm | Lustre at BlueWaters Nathan Rutman, Xyratex |
5:10 pm – 5:30 pm | SDSC’s Data Oasis: Balanced Performance and Cost-Effective Lustre File Systems Rick Wagner and Jeff Johnson, San Diego Supercomputer Center / Aeon Computing |
5:30 pm – 5:50 pm | The Madness of Project George Ben Evans, Terascala |
5:50 pm | End of LUG 2013 |
SC12
- Lustre 2.4 and Beyond Andreas Dilger, Software Architect, Intel High Performance Data Division, November 14, 2012
- Biology on a National Scale Parallel File System Richard LeDuc, Manager, National Center for Genome Analysis Support, Indiana University, November 13, 2012
- OpenSFS Community Development Working Group – Bringing the Lustre Community Together Pamela Hamilton, Group Leader, Software Development Group, Lawrence Livermore National Laboratory, November 12, 2012
LUG 2012
- EOFS Update, Hugo Falter, EOFS
- OpenSFS Update, Galen Shipman, OpenSFS
- Lustre Releases, Peter Jones, Whamcloud
- Lustre Future Development, Andreas Dilger, Whamcloud
- Lustre Network Checksum Performance Improvements, Nathan Rutman, Xyratex
- OSD Restructuring Project Status, Alex Zhuravlev, Whamcloud
- Sequoia’s 55PB Lustre+ZFS Filesystem, Brian Behlendorf, Lawrence Livermore National Laboratory
- Distributed Namespace Phase I Status, Wang Di, Whamcloud
- Network Request Scheduler Scale Testing Results, Nikitas Angelinas, Xyratex
- Lustre Quality, Chris Gearing, Whamcloud
- Lustre 2.1 and Lustre-HSM at IFERC, Diego Moreno, Bull
- Deploying a Lustre File System for the HPC Platform of the Research Area of the Bank of Italy, Giuseppe Bruno, Bank of Italy
- Leveraging Lustre to Address the I/O Challenges of Exascale, Eric Barton, Whamcloud
- LNET Routing Enhancements and Extracting Maximum Performance, Isaac Huang and David Dillow, Xyratex and Oak Ridge National Laboratory
- Lustre Ping Evictor Scaling in LNET Fine Grained Routing Configurations, Cory Spitz and Nic Henke, Cray
- LNET Support for IPv6 is Long Overdue, Isaac Huang, Xyratex
- Using Kerberized Lustre over Wide Area for High Energy Physics Data, Josephine Palencia, Pittsburgh Supercomputing Center
- Secure Identity Mapping for Lustre 2.X, Joshua Walgenbach, Indiana University
- How to Tune Your Wide Area File System for a 100 Gbps Network, Scott Michael, Indiana University
- Installation of LLNL’s Sequoia File System, Marc Stearman, Lawrence Livermore National Laboratory
- Current Status of FEFS for the K Computer, Shinji Sumimoto, Fujitsu
- Lustre as Data Acquisition File System at Diamond Light Source, Frederik Ferner, Diamond Light Source
- Lustre Performance Analysis with SystemTap, Jason Rappley, NASA
- Testing and Debugging with MDSim, Alexey Lyashkov, Xyratex
- MDS Survey, Oleg Drokin, Whamcloud
- HPCS I/O Scenarios, John Carrier, Cray
- A Technical Overview of the OLCF’s Next-generation Center-wide Lustre File System, Sarp Oral, Oak Ridge National Laboratory
- Optimizing Lustre Performance Using Stripe-aware Tools, Paul Kolano, NASA
- High Availability Lustre Using SRP-mirrored LUNs, Charles Taylor, University of Florida
- Lustre Automation Challenges, John Spray, Whamcloud
- Best Practices for Scalable Administration of Lustre, Blake Caldwell, Oak Ridge National Laboratory
Lustre Open-Benchmarking (December 8, 2011)
- HPCS IO Scenarios for OBL, John Carrier
- OBL for SC11
SC11 OpenSFS/EOFS
- OpenSFS SC11 Speaker Schedule
- Lustre Hands On, Andreas Dilger
- Single MDS Performance, Liang Zhen
- TWG Meeting, Dave Dillow and John Carrier
- Considerations for Exascale File Systems (Open Source Parallel File Systems – Transitioning from Petascale to Exascale), Paul Nowoczynski
- Big Data Challenges In Leadership Computing (SC 2011 DDN Lunch Talk), Galen Shipman
- Exascale I/O Challenges (Open Source Parallel File Systems – Transitioning from Petascale to Exascale), Eric Barton
- Exascale Challenges (Open Source Parallel File Systems – Transitioning from Petascale to Exascale), Sage Weil
- Overcoming Roadblocks to Exascale Storage (Open Source Parallel File Systems – Transitioning from Petascale to Exascale), Rob Ross
- SC11 OpenSFS Update, Galen Shipman
- CDWG, Pam Hamilton
- Rock Hard Lustre, Nathan Rutman
LUG 2011
- LUG Kickoff, Galen Shipman, LUG Committee Chair, ORNL
- Lustre Community Releases, Peter Jones, Whamcloud
- Architecture and Implementation of Lustre at the National Climate Computing Research Center, Douglas Fuller, ORNL
- The Scientific User’s Perspective of Lustre, Frank Indiviglio, NOAA
- Effective MapReduce on Lustre, Nathan Rutman, Xyratex
- The Statistical Properties of Lustre Server-side I/O, Andrew Uselton, NERSC
- Testing Methodology for Large Scale File Systems, Sarp Oral, ORNL
- Whamcloud Test and Validation Toolset, Chris Gearing, Whamcloud
- Cloud Infrastructure for Lustre QA and Benchmark, Shuichi Ihara, DataDirect Networks
- An Overview of Fujitsu’s Lustre Based File System, Shinji Sumimoto, Fujitsu
- Empowering Multi-site Heterogeneous Workflows with Lustre-WAN, Scott Michael, Indiana University
- Lustre WAN at 100 Gigabit, Michael Kluge, ZIH (large file, not available through website)
- Community Organization: EOFS
- Community Organization: HPCFS
- Community Organization: OpenSFS
- OpenSFS Lustre Architecture Roadmap, David Dillow, OpenSFS
- Online File System Check, Andreas Dilger, Whamcloud
- ZFS on Linux for Lustre, Brian Behlendorf, LLNL
- btrfs: Overview and Requirements for a btrfs OSD, Johann Lombardi, Whamcloud
- Stability and Performance Analysis of btrfs, Douglas Fuller, ORNL
- Lustre File Creation Performance Enhancements, Ben Evans, Terascala
- Lustre Metadata Operations Improvements, Fan Yong, Whamcloud
- Detecting Hidden File System Problems, Nick Cardo, NERSC (large file, not available through website)
- Log Analysis and lltop, John Hammond, TACC
- Lustre Monitoring Tool (LMT), Chris Morrone, LLNL
- Lustre at Juelich Supercomputing Center (JSC), Frank Heckes, JSC
- Improving Management of Large Lustre File Systems, David Dillow, ORNL
- Lustre/HSM binding, Aurelien Degremont, CEA
- Lustre as a root file system, Robin Humble, NCINF
- Lustre 2.0 in NUMIOA architectures, Diego Moreno, Bull
- A Scalable Health Network for Lustre, Eric Barton, Whamcloud
- Imperative Recovery, Jinshan Xiong, Whamcloud
- Lustre File Striping across a Large Number of OSTs, Oleg Drokin, Whamcloud
- Lustre use cases in the TSUBAME2.0 supercomputer, Hitoshi Sato, Tokyo Institute of Technology (large file, not available through website)