OpenSFS News

OpenSFS Test Cluster Donation

By jfranklin

OpenSFS is looking to donate a small cluster (a PDF describing the hardware is available) that previously supported testing and development of the open-source Lustre parallel file system. While this is still a useful system, it is no longer within OpenSFS’s scope to operate, so we are looking to donate it to a college or university to support activities aligned with our mission to promote innovation and adoption of open-source scalable storage technologies. There are many potential use cases for a system like this, including teaching, training, professional development, software research, or production use as an HPC cluster.

The cluster is wholly owned by OpenSFS and can be transferred as an asset to the receiving campus. The cluster is currently located at Indiana University’s Bloomington campus. The receiving campus will be responsible for packaging and transporting the cluster. Indiana University will remove the cluster from its data center and place it on the loading dock for un-cabling, packaging, etc. The racks are included, and the cluster can be transported in those two units.

Interested colleges or universities should send a one-page proposal describing how they would utilize this system. The proposal must include a primary point of contact: name, email, mailing address, and phone number. Proposals will be reviewed by the OpenSFS board based on the following criteria:

● Growing the open-source community
● Researching file system and storage technologies
● Supporting traditionally underserved students and staff

Membership in OpenSFS will not be a factor in the decision. Proposals are due by midnight PST on February 9, 2018. The receiving organization will be granted a speaking slot at the upcoming LUG 2018 in Chicago at Argonne National Laboratory to describe their plans for the cluster. Two free LUG 2018 registrations will be included (transportation and housing not included). Proposals and any questions should be sent to [email protected]

The Board of OpenSFS

Lustre 2.10.2 Released

By jfranklin

We are pleased to announce that the Lustre 2.10.2 Release has been declared GA and is available for download. You can also grab the source from git.

Along with a number of useful bug fixes, this maintenance release includes the following notable enhancement over 2.10.1:

  • ZFS 0.7.3 is now the default version of ZFS used for the release (LU-10150)

Details of changes since 2.10.1 can be found in the 2.10.2 change log.

Please log any issues found in the issue tracking system.

Thanks to all those who have contributed to the creation of this release.

We are expecting to release Lustre 2.10.3 during Q1 of next year.

SC17 Lustre Community BoF

By jfranklin

Attending SC17? Please join us at the Lustre Community Birds of a Feather (BoF) session!

5:15 p.m. to 5:20 p.m.
Welcome, Agenda, Introduction
Topic: State of Lustre and Its Community, Frank Baetke, Sarp Oral

5:30 p.m. to 5:35 p.m.
Upcoming Lustre Community Events, Frank Baetke, Sarp Oral

5:35 p.m. to 5:40 p.m.
Legal Aspects of Lustre, Hugo Falter

5:40 p.m. to 5:50 p.m.
The Lustre Roadmap, Peter Jones

5:50 p.m. to 7:00 p.m.
Technical Discussion, Andreas Dilger, Frank Baetke, Sarp Oral

PTI Draws Global Attendees to Lustre User Group Meeting

By jfranklin

Users of the supercomputing file system software connect with colleagues, elect officers to OpenSFS board, including IU’s own Ken Rawlings as secretary

Original article available via Indiana University

Proponents of the Lustre file system, which powers many of the world’s fastest supercomputers, gathered last month on the Indiana University Bloomington campus for the annual Open Scalable File Systems Lustre User Group, or LUG, meeting.

LUG is the premier event for the Lustre community and brings together developers, system architects and administrators, and users from all around the world to discuss the current status and future roadmap of Lustre. 2017 has been a transitional year for OpenSFS, the organization dedicated to the success of the Lustre file system, as it moves to a user-led, nonprofit model. In fact, this year marks the first time the LUG meeting was hosted by a user institution.

“We were honored to host LUG17 and its nearly 200 attendees, who traveled to Bloomington from 13 countries and more than 70 institutions,” said Stephen Simms, former OpenSFS president and manager of the High Performance File Systems group at Indiana University. “Everyone at IU did their best to create a successful conference with ample time to connect with colleagues, professionally and socially.”

In addition to workshops and presentations, LUG17 featured an opening reception at IU’s Cyberinfrastructure Building sponsored by DDN, an Intel-sponsored dinner and movie showing of the 1979 film “Breaking Away” (which was filmed in Bloomington), and a pub crawl sponsored by HGST/WARP.

Meet the OpenSFS Board

The newly elected OpenSFS Board members and officers are (left to right): Kevin Harms (Argonne National Laboratory), vice president; Ken Rawlings (Indiana University), secretary; Shawn Hall (BP), director at large; Rick Wagner (Globus), treasurer; and Sarp Oral (Oak Ridge National Laboratory), president.

“LUG2017 was a smashing success, thanks to the flawless coordination between IU and OpenSFS,” said Sarp Oral, president of OpenSFS. “The event brought Lustre system architects, developers, and administrators from all around the world, and it was very well received by the attendees. As the president of OpenSFS and on behalf of the Lustre community, I would like to express my sincere gratitude and thanks to the IU staff who made this event a true success.”

OpenSFS Board elections were also held as part of LUG17. These elections concluded the re-organization and transition of OpenSFS as a Lustre user-driven organization. Newly elected OpenSFS Board members and officers include:

  • Sarp Oral (Oak Ridge National Laboratory) as president
  • Kevin Harms (Argonne National Laboratory) as vice president
  • Ken Rawlings (Indiana University) as secretary
  • Rick Wagner (Globus) as treasurer
  • Shawn Hall (BP) as the director at large

IU’s Ken Rawlings is a senior systems analyst in the High Performance File Systems group in the university’s Research Technologies division. In his new role as secretary of the OpenSFS board, he will be responsible for general records management, including maintaining meeting documents and creating detailed reports.

“It’s an honor to be able to serve the community in this way,” said Rawlings. “I’m quite dedicated to Lustre, and am looking forward to helping it become an even more vibrant ecosystem for the global high performance computing community.”

The LUG17 agenda and presentation materials and videos are now available at


Lustre 2.10.0 Released

By jfranklin

We are pleased to announce that the Lustre 2.10.0 Release has been declared GA and is available for download. You can also grab the source from git.

This major release includes new features:
  • Progressive File Layouts: Enables file layouts to adjust automatically as a file grows, optimizing performance for diverse workloads (LU-8898)
  • Multi-Rail LNet: Allows LNet to utilize multiple network interfaces on a node in parallel, aggregating their performance (LU-7734)
  • Project Quotas: Extends the Lustre quotas feature with the option to place quotas on a per-project basis rather than only per user or per group (LU-4017)
  • Simplified Userspace Snapshots: Provides a mechanism to leverage the snapshot capability in OpenZFS to take a coordinated snapshot of a Lustre filesystem (LU-8900)
  • NRS Delay Policy: Simulates high server load as a way of validating the resilience of Lustre under load (LU-6283)
Fuller details can be found in the 2.10 wiki page (including the changelog and test matrix).
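As an illustration of the Progressive File Layouts feature, a composite layout can be applied to a directory with `lfs setstripe` so that files inherit it; the mount point, directory name, and component boundaries below are purely illustrative, not recommendations:

```shell
# Hypothetical Lustre mount point and directory.
# First 4 MiB of each file uses a single stripe; the region up to 256 MiB
# uses 4 stripes; anything beyond stripes across all available OSTs (-c -1).
lfs setstripe -E 4M -c 1 -E 256M -c 4 -E -1 -c -1 /mnt/lustre/pfl_dir

# Inspect the resulting composite layout:
lfs getstripe /mnt/lustre/pfl_dir
```

Because the layout components are instantiated only as the file grows, small files avoid the overhead of wide striping while large files still gain its bandwidth.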

Please log any issues found in the issue tracking system.

Thanks to all those who have contributed to the creation of this release.

This is the first release of the 2.10.x LTS release stream. A freely available Lustre 2.10.1 release is planned in the coming weeks.

Choose Lustre and attend LUG17 at IU

By jfranklin

Regular Registration will be closing May 1 for the Lustre User Group 2017 conference, May 30-June 2, 2017. There are significant savings for registering by this date. You can see the full conference agenda and register at .

You don’t want to miss this year’s conference, featuring the latest Lustre developments and special events for connecting with your colleagues – like an opening reception sponsored by DDN at IU’s Cyberinfrastructure Building, Dinner and a Movie (Breaking Away) at the historic IU Auditorium sponsored by Intel, and a pub crawl sponsored by HGST/WARP.

Registration will be capped at 200 attendees.  Only a few late registrations will be accepted – and only if capacity has not been reached.

The hotel block for LUG 2017 attendees will expire soon. A few rooms are still available at the IMU, but be sure to book by the hotel reservation deadline of April 25, 2017, at 11:59 p.m. EST.
Questions about the conference? Contact [email protected]

We look forward to hosting you in Bloomington!

About PTI
The Pervasive Technology Institute at Indiana University is a world-class organization dedicated to the development and delivery of innovative information technology to advance research, education, industry and society. Since 2000, PTI has received more than $50 million from the National Science Foundation to advance the nation’s research cyberinfrastructure. Established by a major grant from the Lilly Endowment, the Pervasive Technology Institute brings together researchers and technologists from a range of disciplines and organizations, including the IU School of Informatics and Computing, the IU Maurer School of Law and the College of Arts and Sciences at Bloomington and University Information Technology Services at Indiana University.

3rd International Workshop on the Lustre Ecosystem: Support for Deepening Memory and Storage Hierarchy

By jfranklin

Hanover, Maryland
July 25-26, 2017

The Lustre parallel file system has been widely adopted by high-performance computing (HPC) systems as an effective mechanism for managing large-scale storage and I/O resources. Lustre is an open-source parallel file system technology heavily used by the world’s fastest HPC systems. It achieves unprecedented aggregate performance by parallelizing I/O over file system clients and storage targets at extreme scales. Large-scale checkpoint storage and retrieval, which is characterized by bursty I/O from coordinated parallel clients, has been the primary driver of Lustre development over the last decade.

With the introduction of non-volatile storage technologies, many HPC centers are seeing a proliferation of I/O layers in the end-to-end storage hierarchy that place new demands on Lustre. Effectively managing the node-local memory and these new layers is a new challenge for Lustre and requires new technologies and data management policies to be developed to effectively handle data storage and movement across the I/O stack.

In July of 2017, the 3rd International Workshop on the Lustre Ecosystem will be held in Hanover, Maryland. This workshop series is intended to help explore improvements in the performance, flexibility, and usability of Lustre for supporting diverse application workloads and diverse HPC architectures. The past workshops have helped foster discussion of the open challenges associated with enhancing Lustre for diverse applications and architectures, the technological advances necessary, and the associated impacts to the Lustre ecosystem. The 3rd International Lustre Ecosystems Workshop will present a series of invited talks from industry, academia, and US National Laboratories focusing on:

  • Lustre Node-Local Memory Management
  • Multilayered Lustre Storage Architectures
  • Data Flow in Lustre across Multiple I/O Stacks
  • Data Management and Handling in Lustre across Multiple I/O Stacks
  • Data Resiliency and Replication Mechanisms in Lustre across Multiple I/O Stacks
  • Data Provenance in Lustre across Multiple I/O Stacks

See the conference information at

LUG17 abstracts due 2/26 – early bird ends 3/15

By jfranklin

Want to present at LUG? Be sure to submit your abstract by the February 26th deadline. Authors of accepted abstracts will be notified in time to take advantage of early bird registration.

We have a great week planned – including a range of presentations on best practices and boundary-pushing deployments as well as a hackathon.

Networking opportunities abound with an opening reception sponsored by DDN, Dinner and a Show sponsored by Intel, and a pub crawl sponsored by WARP/HGST.

See the conference overview at

Time is running out for LUG17 early bird registration! Register now at to take advantage of the early bird rate of $499. After March 15, fees increase substantially.

Questions about the conference? Contact [email protected]

We look forward to seeing you in Bloomington!

LUG 2017 – Call for Presentations

By jfranklin

The Lustre User Group (LUG) conference is the industry’s primary venue for discussion and seminars on the Lustre parallel file system.

This year’s conference is being held in Bloomington, Indiana from May 30-June 2, 2017. For more information, visit the LUG 2017 web page.

The LUG Planning Committee is particularly seeking presentations on:

  • experiences running the newer community releases – 2.8 and 2.9 – in production
  • experiences using the new features in those releases (DNE2, SSK, UID/GID mapping, etc.)
  • best practices and practical experiences in deploying, monitoring and operating Lustre
  • pushing the boundaries with non-traditional deployments

For more information, or to submit an abstract, please visit the Call for Presentations web page.

Lustre 2.9 Released

By jfranklin

We are pleased to announce that the Lustre 2.9.0 Release has been declared GA and is available for download. You can also grab the source from git.

This major release includes new features:

  • UID/GID Mapping: Provides the capability to remap user and group IDs based on node address, simplifying administration when sharing access between security domains (LU-3291)
  • Shared Key Crypto: Identifies clients/servers and authenticates/encrypts RPCs (LU-3289)
  • Subdirectory Mounts: Offers clients the option to mount only a defined subset of the filesystem (LU-29)
  • Server Side Advise and Hinting: A Lustre equivalent of fadvise that provides hints to the server about the nature of the data access (willread, dontneed) so that appropriate steps can be taken (LU-4931)
  • Large Bulk IO: Allows larger bulk RPC sizes to be configured for more efficient network/disk IO (LU-7990)
  • Weak Updates: Lustre support for weak updates (KMP) has been added. Please note the associated name changes for the RPMs (LU-5614)

Fuller details can be found in the 2.9 wiki page (including the change log and test matrix).
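To illustrate two of the features above, a sketch of the client-side commands follows; the MGS nickname, filesystem name, paths, and byte ranges are hypothetical placeholders, not values from this release:

```shell
# Subdirectory Mounts (LU-29): a conventional mount exposes the whole
# namespace, while a subdirectory mount restricts a client to one subtree.
mount -t lustre mgs@tcp:/testfs /mnt/lustre            # full filesystem
mount -t lustre mgs@tcp:/testfs/project1 /mnt/project1 # subtree only

# Server Side Advise and Hinting (LU-4931): lfs ladvise sends an fadvise-like
# hint to the servers, e.g. that the given byte range will soon be read.
lfs ladvise -a willread -s 0 -e 1M /mnt/lustre/project1/input.dat
```

A client granted only a subdirectory mount never sees the rest of the namespace, which is useful when sharing one filesystem across groups.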

The following are known issues in the Lustre 2.9 Release:

  • LU-8313 – Deployments using Kerberos that have upgraded from an earlier release will need to manually add a -k flag to svcgssd in order to maintain compatibility
  • LU-8880 – Kerberos does not behave in a consistent manner when deployed in a DNE configuration

These issues are being actively worked and have proposed fixes under testing and review.

Please log any issues found in the issue tracking system.

Thanks to all those who have contributed to the creation of this release.