Lustre (file system)
{{Infobox software
| name = Lustre
| logo = Lustre file system logo.gif
| logo_size = 200px
| released = {{Start date and age|2003|12|16}}[1]
| latest release version = 2.12.0 (latest major release),[2] 2.10.6 (latest maintenance release)[2]
| latest release date = {{Start date and age|2018|12|21}}
| latest preview version = 2.12.51
| latest preview date = {{Start date and age|2019|02|04}}
| repo = {{URL|1=https://git.whamcloud.com/?p=fs/lustre-release.git}}
| programming language = C
| operating system = Linux kernel
| genre = Distributed file system
| license = GPL v2
| website = {{URL|lustre.org}}
}}
{{Infobox company
| name = Cluster File Systems, Inc.
| logo = File:Cluster File Systems Inc. logo.gif
| type = Private
| foundation = 2001
| founder = Peter J. Braam
| location_city = Boulder, Colorado
| key_people = Phil Schwan, Eric Barton (HPC), Andreas Dilger
| products = Lustre file system
}}
{{Infobox file system
| name = Lustre
| introduction_date = December 2003
| introduction_os = Linux
| directory_struct = Hash, Interleaved Hash with DNE in 2.7+
| bootable = No
| min_volume_size = 32 MB
| max_volume_size = 100 PB (production), over 16 EB (theoretical)
| max_file_size = 3.2 PB (ext4), 16 EB (ZFS)
| file_size_granularity = 4 KB
| max_files_no = Per Metadata Target (MDT): 4 billion files (ldiskfs backend), 256 trillion files (ZFS backend),[3] up to 128 MDTs per filesystem
| max_filename_size = 255 bytes
| max_dirname_size = 255 bytes
| max_directory_depth = 4096 bytes
| filename_character_set = All bytes except NUL ('\0') and '/', and the special file names "." and ".."
| dates_recorded = modification (mtime), attribute modification (ctime), access (atime), delete (dtime), create (crtime)
| date_range = 2^34 bits (ext4), 2^64 bits (ZFS)
| date_resolution = 1 s
| forks_streams = No
| attributes = 32bitapi, acl, checksum, flock, lazystatfs, localflock, lruresize, noacl, nochecksum, noflock, nolazystatfs, nolruresize, nouser_fid2path, nouser_xattr, user_fid2path, user_xattr
| file_system_permissions = POSIX, POSIX.1e ACL, SELinux
| compression = Yes (ZFS only)
| encryption = Yes (network only)
| data_deduplication = Yes (ZFS only)
| copy_on_write = Yes (ZFS only)
| OS = Linux kernel
}}

Lustre is a type of parallel distributed file system, generally used for large-scale cluster computing. The name Lustre is a portmanteau word derived from Linux and cluster.[4] Lustre file system software is available under the GNU General Public License (version 2 only) and provides high-performance file systems for computer clusters ranging in size from small workgroup clusters to large-scale, multi-site clusters. Because Lustre file systems have high performance capabilities and open licensing, they are often used in supercomputers. Since June 2005, Lustre has consistently been used by at least half of the top ten, and more than 60 of the top 100, fastest supercomputers in the world,[5][6][7] including
the world's No. 2 and No. 3 ranked TOP500 supercomputers in 2014, Titan and Sequoia.[8][9] Lustre file systems are scalable and can be part of multiple computer clusters with tens of thousands of client nodes, tens of petabytes (PB) of storage on hundreds of servers, and more than a terabyte per second (TB/s) of aggregate I/O throughput.[10][11] This makes Lustre file systems a popular choice for businesses with large data centers, including those in industries such as meteorology, simulation, oil and gas, life science, rich media, and finance.[12]

== History ==
The Lustre file system architecture was started as a research project in 1999 by Peter J. Braam, who was on the staff of Carnegie Mellon University (CMU) at the time. Braam went on to found his own company, Cluster File Systems, in 2001,[13] starting from work on the InterMezzo file system in the Coda project at CMU.[14] Lustre was developed under the Accelerated Strategic Computing Initiative Path Forward project funded by the United States Department of Energy, which included Hewlett-Packard and Intel.[15] In September 2007, Sun Microsystems acquired the assets of Cluster File Systems Inc., including its intellectual property.[16][17] Sun included Lustre with its high-performance computing hardware offerings, with the intent to bring Lustre technologies to Sun's ZFS file system and the Solaris operating system. In November 2008, Braam left Sun Microsystems, and Eric Barton and Andreas Dilger took control of the project. In 2010, Oracle Corporation, by way of its acquisition of Sun, began to manage and release Lustre.

In December 2010, Oracle announced that it would cease Lustre 2.x development and place Lustre 1.8 into maintenance-only support, creating uncertainty around the future development of the file system.[18] Following this announcement, several new organizations sprang up to provide support and development in an open community development model, including Whamcloud,[19] Open Scalable File Systems, Inc. (OpenSFS), European Open File Systems (EOFS), and others. By the end of 2010, most Lustre developers had left Oracle. Braam and several associates joined the hardware-oriented Xyratex when it acquired the assets of ClusterStor,[20][21] while Barton, Dilger, and others formed the software startup Whamcloud, where they continued to work on Lustre.[22]

In August 2011, OpenSFS awarded a contract for Lustre feature development to Whamcloud.[23] This contract covered the completion of features, including improved single-server metadata performance scaling, which allows Lustre to better take advantage of many-core metadata servers; online Lustre distributed filesystem checking (LFSCK), which allows verification of the distributed filesystem state between data and metadata servers while the filesystem is mounted and in use; and Distributed Namespace Environment (DNE), formerly Clustered Metadata (CMD), which allows the Lustre metadata to be distributed across multiple servers.
Development also continued on ZFS-based back-end object storage at Lawrence Livermore National Laboratory.[9] These features were in the Lustre 2.2 through 2.4 community release roadmap.[24] In November 2011, a separate contract was awarded to Whamcloud for the maintenance of the Lustre 2.x source code to ensure that the Lustre code would receive sufficient testing and bug fixing while new features were being developed.[25] In July 2012, Whamcloud was acquired by Intel,[26][27] after Whamcloud won the FastForward DOE contract to extend Lustre for exascale computing systems in the 2018 timeframe.[28] OpenSFS then transitioned contracts for Lustre development to Intel.

In February 2013, Xyratex Ltd. announced that it had acquired the original Lustre trademark, logo, website and associated intellectual property from Oracle.[20] In June 2013, Intel began expanding Lustre usage beyond traditional HPC, such as within Hadoop.[29] For 2013 as a whole, OpenSFS announced requests for proposals (RFPs) to cover Lustre feature development, parallel file system tools, addressing Lustre technical debt, and parallel file system incubators.[30] OpenSFS also established the Lustre Community Portal, a technical site that provides a collection of information and documentation in one area for reference and guidance to support the Lustre open source community. On April 8, 2014, Ken Claffey announced that Xyratex/Seagate was donating the [https://lustre.org lustre.org] domain back to the user community,[31] and this was completed in March 2015. In June 2018, the Lustre team and assets were acquired from Intel by DDN. DDN organized the new acquisition as an independent division, reviving the Whamcloud name for the new division.[32]

== Release history ==
A Lustre file system was first installed for production use in March 2003 on the MCR Linux Cluster at Lawrence Livermore National Laboratory,[33] one of the largest supercomputers at the time.[34]

Lustre 1.0.0 was released in December 2003,[1] and provided basic Lustre filesystem functionality, including server failover and recovery.

Lustre 1.2.0, released in March 2004, worked on Linux kernel 2.6, had a "size glimpse" feature to avoid lock revocation on files undergoing write, and added client-side data write-back cache accounting (grant).

Lustre 1.4.0, released in November 2004, provided protocol compatibility between versions, could use InfiniBand networks, and could exploit extents/mballoc in the ldiskfs on-disk filesystem.

Lustre 1.6.0, released in April 2007, added mount configuration ("mountconf"), allowing servers to be configured with "mkfs" and "mount", allowed dynamic addition of object storage targets (OSTs), enabled Lustre distributed lock manager (LDLM) scalability on symmetric multiprocessing (SMP) servers, and provided free space management for object allocations.

Lustre 1.8.0, released in May 2009, provided OSS Read Cache, improved recovery in the face of multiple failures, added basic heterogeneous storage management via OST Pools, adaptive network timeouts, and version-based recovery. It was a transition release, being interoperable with both Lustre 1.6 and Lustre 2.0.[35]

Lustre 2.0, released in August 2010, was based on significantly restructured internal code to prepare for major architectural advancements. Lustre 2.x clients cannot interoperate with 1.8 or earlier servers. However, Lustre 1.8.6 and later clients can interoperate with Lustre 2.0 and later servers.
The Metadata Target (MDT) and OST on-disk formats from 1.8 can be upgraded to 2.0 and later without the need to reformat the filesystem.

Lustre 2.1, released in September 2011, was a community-wide initiative in response to Oracle suspending development on Lustre 2.x releases.[36] It added the ability to run servers on Red Hat Linux 6 and increased the maximum ext4-based OST size from 24 TB to 128 TB,[37] as well as a number of performance and stability improvements. Lustre 2.1 servers remained interoperable with 1.8.6 and later clients.

Lustre 2.2, released in March 2012, focused on providing metadata performance improvements and new features.[38] It added parallel directory operations allowing multiple clients to traverse and modify a single large directory concurrently, faster recovery from server failures, increased stripe counts for a single file (across up to 2000 OSTs), and improved single-client directory traversal performance.

Lustre 2.3, released in October 2012, continued to improve the metadata server code to remove internal locking bottlenecks on nodes with many CPU cores (over 16). The object store added a preliminary ability to use ZFS as the backing file system. The Lustre File System ChecK (LFSCK) feature can verify and repair the MDS Object Index (OI) while the file system is in use, after a file-level backup/restore or in case of MDS corruption. The server-side IO statistics were enhanced to allow integration with batch job schedulers such as SLURM to track per-job statistics. Client-side software was updated to work with Linux kernels up to version 3.0.

Lustre 2.4, released in May 2013, added a considerable number of major features, many funded directly through OpenSFS. Distributed Namespace Environment (DNE) allows horizontal metadata capacity and performance scaling for 2.4 clients, by allowing subdirectory trees of a single namespace to be located on separate MDTs. ZFS can now be used as the backing filesystem for both MDT and OST storage. The LFSCK feature added the ability to scan and verify the internal consistency of the MDT FID and LinkEA attributes. The Network Request Scheduler (NRS)[39][40] adds policies to optimize client request processing for disk ordering or fairness. Clients can optionally send bulk RPCs up to 4 MB in size. Client-side software was updated to work with Linux kernels up to version 3.6, and Lustre 2.4 servers remained interoperable with 1.8 clients.

Lustre 2.5, released in October 2013, added the highly anticipated feature Hierarchical Storage Management (HSM). A core requirement in enterprise environments, HSM allows customers to easily implement tiered storage solutions in their operational environment. This release is the current OpenSFS-designated Maintenance Release branch of Lustre.[41][42][43][44] The most recent maintenance version is 2.5.3, released in September 2014.[45]

Lustre 2.6, released in July 2014,[46] was a more modest release feature-wise, adding LFSCK functionality to do local consistency checks on the OST as well as consistency checks between MDT and OST objects. The NRS Token Bucket Filter[47] (TBF) policy was added. Single-client IO performance was improved over the previous releases.[48] This release also added a preview of DNE striped directories, allowing single large directories to be stored on multiple MDTs to improve performance and scalability.

Lustre 2.7, released in March 2015,[49] added LFSCK functionality to verify DNE consistency of remote and striped directories between multiple MDTs.
Dynamic LNet Config adds the ability to configure and modify LNet network interfaces, routes, and routers at runtime. A new evaluation feature was added for UID/GID mapping for clients with different administrative domains, along with improvements to the DNE striped directory functionality.

Lustre 2.8, released in March 2016,[50] finished the DNE striped directory feature, including support for migrating directories between MDTs, and cross-MDT hard link and rename. As well, it included improved support for Security-Enhanced Linux (SELinux) on the client, Kerberos authentication and RPC encryption over the network, and performance improvements for LFSCK.

Lustre 2.9 was released in December 2016[51] and included a number of features related to security and performance. The Shared Secret Key security flavour uses the same GSSAPI mechanism as Kerberos to provide client and server node authentication, and RPC message integrity and security (encryption). The Nodemap feature allows categorizing client nodes into groups and then mapping the UID/GID for those clients, allowing remotely administered clients to transparently use a shared filesystem without having a single set of UID/GIDs for all client nodes. The subdirectory mount feature allows clients to mount a subset of the filesystem namespace from the MDS. This release also added support for up to 16 MiB RPCs for more efficient I/O submission to disk.

Lustre 2.10 was released in July 2017[52] and has a number of significant improvements. The LNet Multi-Rail (LMR) feature allows bonding multiple network interfaces (InfiniBand, Omni-Path, and/or Ethernet) on a client and server to increase aggregate I/O bandwidth. File layouts can now be constructed of multiple components, based on the file offset, which allows different parameters such as stripe count and OST pool to be determined based on the file size. The NRS Token Bucket Filter (TBF) server-side scheduler has implemented new rule types, including RPC-type scheduling and the ability to specify multiple parameters such as JobID and NID for rule matching. Tools for managing ZFS snapshots of Lustre filesystems have been added, to simplify the creation, mounting, and management of MDT and OST ZFS snapshots as separate Lustre mountpoints.

Lustre 2.11 was released in April 2018[53] and contains two significant new features and several smaller features. The File Level Redundancy (FLR) feature expands on the 2.10 PFL implementation, adding the ability to specify mirrored file layouts for improved availability in case of storage or server failure and/or improved performance with highly concurrent reads. The Data-on-MDT (DoM) feature allows small (few MiB) files to be stored on the MDT to leverage typical flash-based RAID-10 storage for lower latency and reduced IO contention, instead of the typical HDD RAID-6 storage used on OSTs. As well, the LNet Dynamic Discovery feature allows auto-configuration of LNet Multi-Rail between peers that share an LNet network. The LDLM Lock Ahead feature allows appropriately modified applications and libraries to pre-fetch DLM extent locks from the OSTs for files, if the application knows (or predicts) that this file extent will be modified in the near future, which can reduce lock contention for multiple clients writing to the same file.
Lustre 2.12 was released on December 21, 2018[54] and focused on improving Lustre usability and stability, with improvements to the performance and functionality of the FLR and DoM features added in Lustre 2.11, as well as smaller changes to NRS TBF, HSM, and JobStats. It added [https://wiki.whamcloud.com/display/LNet/LNet+Health LNet Network Health] to allow the LNet Multi-Rail feature from Lustre 2.10 to better handle network faults when a node has multiple network interfaces. The Lazy Size on MDT (LSOM) feature allows storing an estimate of the file size on the MDT for use by policy engines, filesystem scanners, and other management tools, which can then make decisions about files more efficiently without fully accurate file sizes or block counts, and without having to query the OSTs for this information. This release also added the ability to manually restripe an existing directory across multiple MDTs, to allow migration of directories with large numbers of files to use the capacity and performance of several MDS nodes. Lustre RPC data checksums were extended with SCSI T10-PI integrated data checksums from the client to the kernel block layer, SCSI host adapter, and T10-enabled hard drives.

== Architecture ==
A Lustre file system has three major functional units:
* One or more metadata servers (MDS) with one or more metadata targets (MDTs) per Lustre filesystem that store namespace metadata, such as filenames, directories, access permissions, and file layouts.
* One or more object storage servers (OSS) that store file data on one or more object storage targets (OSTs).
* Clients that access and use the data.
The MDT, OST, and client may be on the same node (usually for testing purposes), but in typical production installations these devices are on separate nodes communicating over a network. Each MDT and OST may be part of only a single filesystem, though it is possible to have multiple MDTs or OSTs on a single node that are part of different filesystems. The Lustre Network (LNet) layer can use several types of network interconnects, including native InfiniBand verbs, Omni-Path, RoCE, and iWARP via OFED, TCP/IP on Ethernet, and other proprietary network technologies such as the Cray Gemini interconnect. In Lustre 2.3 and earlier, Myrinet, Quadrics, Cray SeaStar and RapidArray networks were also supported, but these network drivers were deprecated when these networks were no longer commercially available, and support was removed completely in Lustre 2.8. Lustre will take advantage of remote direct memory access (RDMA) transfers, when available, to improve throughput and reduce CPU usage.

The storage used for the MDT and OST backing filesystems is normally provided by hardware RAID devices, though it will work with any block device. Since Lustre 2.4, the MDT and OST can also use ZFS for the backing filesystem in addition to ext4, allowing them to effectively use JBOD storage instead of hardware RAID devices. The Lustre OSS and MDS servers read, write, and modify data in the format imposed by the backing filesystem and return this data to the clients. This allows Lustre to take advantage of improvements and features in the underlying filesystem, such as compression and data checksums in ZFS. Clients do not have any direct access to the underlying storage, which ensures that a malfunctioning or malicious client cannot corrupt the filesystem structure.

An OST is a dedicated filesystem that exports an interface to byte ranges of file objects for read/write operations, with extent locks to protect data consistency. An MDT is a dedicated filesystem that stores inodes, directories, POSIX and extended file attributes, controls file access permissions/ACLs, and tells clients the layout of the object(s) that make up each regular file. MDTs and OSTs currently use either an enhanced version of ext4 called ldiskfs, or ZFS/DMU for back-end data storage to store files/objects,[55] using the open source ZFS-on-Linux port.[56]

The client mounts the Lustre filesystem locally with a VFS driver for the Linux kernel that connects the client to the server(s). Upon initial mount, the client is provided a File Identifier (FID) for the root directory of the mountpoint. When the client accesses a file, it performs a filename lookup on the MDS. When the MDS filename lookup is complete and the user and client have permission to access and/or create the file, either the layout of an existing file is returned to the client or a new file is created on behalf of the client, if requested. For read or write operations, the client then interprets the file layout in the logical object volume (LOV) layer, which maps the file logical offset and size to one or more objects. The client then locks the file range being operated on and executes one or more parallel read or write operations directly to the OSS nodes that hold the data objects. With this approach, bottlenecks for client-to-OSS communications are eliminated, so the total bandwidth available for the clients to read and write data scales almost linearly with the number of OSTs in the filesystem.
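To make the LOV mapping described above concrete, the following is a minimal C sketch of round-robin striping arithmetic. It is illustrative only: the struct and function names are invented for this example and are not part of the Lustre client code.

<syntaxhighlight lang="c">
/*
 * Illustrative sketch only; these names are not the Lustre API.
 * Maps a logical file offset onto (object index, offset within that
 * object) for a file striped round-robin over several OST objects,
 * which is the kind of mapping the client-side LOV layer performs.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_layout {
    uint32_t stripe_count; /* number of OST objects making up the file */
    uint64_t stripe_size;  /* bytes stored on one object before moving on */
};

struct toy_target {
    uint32_t obj_index;    /* which OST object holds this byte */
    uint64_t obj_offset;   /* offset of the byte inside that object */
};

static struct toy_target map_offset(const struct toy_layout *lo, uint64_t file_off)
{
    uint64_t stripe_no = file_off / lo->stripe_size; /* global stripe number */
    struct toy_target t;

    t.obj_index = (uint32_t)(stripe_no % lo->stripe_count);
    /* full stripes already stored on that object, plus the remainder
     * within the current stripe */
    t.obj_offset = (stripe_no / lo->stripe_count) * lo->stripe_size +
                   file_off % lo->stripe_size;
    return t;
}

int main(void)
{
    struct toy_layout lo = { .stripe_count = 4, .stripe_size = 1 << 20 };
    uint64_t off = 7 * (1 << 20) + 4096; /* 7 MiB + 4 KiB into the file */
    struct toy_target t = map_offset(&lo, off);

    printf("object %u, offset %llu\n", t.obj_index,
           (unsigned long long)t.obj_offset);
    return 0;
}
</syntaxhighlight>

With a stripe size of 1 MiB and a stripe count of 4, the byte at file offset 7 MiB + 4 KiB falls in the fourth object (index 3) at offset 1 MiB + 4 KiB within that object, matching the round-robin striping described under "Data objects and file striping" below.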
After the initial lookup of the file layout, the MDS is not normally involved in file IO operations, since all block allocation and data IO is managed internally by the OST. Clients do not directly modify the objects or data on the OST filesystems, but instead delegate this task to OSS nodes. This approach ensures scalability for large-scale clusters and supercomputers, as well as improved security and reliability. In contrast, shared block-based filesystems such as GPFS and OCFS allow direct access to the underlying storage by all of the clients in the filesystem, which requires a large back-end SAN attached to all clients, and increases the risk of filesystem corruption from misbehaving or defective clients.

== Implementation ==
In a typical Lustre installation on a Linux client, a Lustre filesystem driver module is loaded into the kernel and the filesystem is mounted like any other local or network filesystem. Client applications see a single, unified filesystem even though it may be composed of tens to thousands of individual servers and MDT/OST filesystems. On some massively parallel processor (MPP) installations, computational processors can access a Lustre file system by redirecting their I/O requests to a dedicated I/O node configured as a Lustre client. This approach is used in the Blue Gene installation[57] at Lawrence Livermore National Laboratory.

Another approach used in the early years of Lustre was the liblustre library on the Cray XT3, using the Catamount operating system on systems such as Sandia Red Storm,[58] which provided userspace applications with direct filesystem access. Liblustre was a user-level library that allowed computational processors to mount and use the Lustre file system as a client. Using liblustre, the computational processors could access a Lustre file system even if the service node on which the job was launched was not a Linux client. Liblustre allowed data movement directly between application space and the Lustre OSSs without requiring an intervening data copy through the kernel, thus providing access from computational processors to the Lustre file system directly in a constrained operating environment. The liblustre functionality was deleted from Lustre 2.7.0, after having been disabled since Lustre 2.6.0 and untested since Lustre 2.3.0. In Linux kernel version 4.18, the incomplete port of the Lustre client was removed from the kernel staging area in order to speed up development and porting to newer kernels.[59] The out-of-tree Lustre client and server are still available for RHEL, SLES, and Ubuntu distro kernels, as well as vanilla kernels.

== Data objects and file striping ==
In a traditional Unix disk file system, an inode data structure contains basic information about each file, such as where the data contained in the file is stored. The Lustre file system also uses inodes, but inodes on MDTs point to one or more OST objects associated with the file rather than to data blocks. These objects are implemented as files on the OSTs. When a client opens a file, the file open operation transfers a set of object identifiers and their layout from the MDS to the client, so that the client can directly interact with the OSS node(s) where the objects are stored. This allows the client to perform I/O in parallel across all of the OST objects in the file without further communication with the MDS. If only one OST object is associated with an MDT inode, that object contains all the data in the Lustre file.
When more than one object is associated with a file, data in the file is "striped" in chunks in a round-robin manner across the OST objects, similar to RAID 0. Striping a file over multiple OST objects provides significant performance benefits if there is a need for high-bandwidth access to a single large file. When striping is used, the maximum file size is not limited by the size of a single target. Capacity and aggregate I/O bandwidth scale with the number of OSTs a file is striped over. Also, since the locking of each object is managed independently for each OST, adding more stripes (one per OST) scales the file I/O locking capacity of the file proportionately. Each file created in the filesystem may specify different layout parameters, such as the stripe count (number of OST objects making up that file), stripe size (unit of data stored on each OST before moving to the next), and OST selection, so that performance and capacity can be tuned optimally for each file. When many application threads are reading or writing to separate files in parallel, it is optimal to have a single stripe per file, since the application is providing its own parallelism. When there are many threads reading or writing a single large file concurrently, then it is optimal to have one stripe on each OST to maximize the performance and capacity of that file.

In the Lustre 2.10 release, the ability to specify composite layouts was added to allow files to have different layout parameters for different regions of the file. The Progressive File Layout (PFL) feature uses composite layouts to improve file IO performance over a wider range of workloads, as well as simplify usage and administration. For example, a small PFL file can have a single stripe for low access overhead, while larger files can have many stripes for high aggregate bandwidth and better OST load balancing. The composite layouts are further enhanced in the 2.11 release with the File Level Redundancy (FLR) feature, which allows a file to have multiple overlapping layouts, providing RAID 0+1 redundancy for these files as well as improved read performance. The Lustre 2.11 release also added the Data-on-MDT (DoM) feature, which allows the first component of a PFL file to be stored directly on the MDT with the inode. This reduces the overhead for accessing small files, both in terms of space usage (no OST object is needed) and network usage (fewer RPCs are needed to access the data). DoM also improves performance for small files if the MDT is SSD-based, while the OSTs are disk-based.

== Metadata objects and DNE remote or striped directories ==
When a client initially mounts a filesystem, it is provided the 128-bit Lustre File Identifier (FID, composed of the 64-bit sequence number, 32-bit object ID, and 32-bit version) of the root directory for the mountpoint. When doing a filename lookup, the client performs a lookup of each pathname component by mapping the parent directory FID sequence number to a specific MDT via the FID Location Database (FLDB), and then does a lookup on the MDS managing this MDT using the parent FID and filename. The MDS will return the FID for the requested pathname component along with a DLM lock. Once the MDT of the last parent directory is determined, further directory operations (for non-striped directories) take place exclusively on that MDT, avoiding contention between MDTs.
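The identifiers described above can be pictured with a short illustrative C fragment. The names below are invented for this sketch and are not Lustre's internal structures; it models a FID with its 64-bit sequence, 32-bit object ID, and 32-bit version, together with a toy FID Location Database that maps a sequence range to the index of the MDT serving it.

<syntaxhighlight lang="c">
/*
 * Illustrative only; not Lustre's actual data structures.
 * Models a FID (64-bit sequence, 32-bit object ID, 32-bit version)
 * and a toy "FID location database" mapping sequence ranges to
 * MDT indices, as described in the text above.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct toy_fid {
    uint64_t seq; /* sequence number */
    uint32_t oid; /* object ID within the sequence */
    uint32_t ver; /* version */
};

struct toy_fld_entry {
    uint64_t seq_start; /* first sequence of the range */
    uint64_t seq_end;   /* last sequence of the range */
    uint32_t mdt_index; /* MDT serving that range */
};

/* Toy FLDB: which MDT serves which range of sequence numbers. */
static const struct toy_fld_entry fldb[] = {
    { 0x200000400ULL, 0x2000007ffULL, 0 },
    { 0x200000800ULL, 0x200000bffULL, 1 },
};

static int fid_to_mdt(const struct toy_fid *fid)
{
    for (size_t i = 0; i < sizeof(fldb) / sizeof(fldb[0]); i++)
        if (fid->seq >= fldb[i].seq_start && fid->seq <= fldb[i].seq_end)
            return (int)fldb[i].mdt_index;
    return -1; /* unknown sequence: the client would have to ask a server */
}

int main(void)
{
    struct toy_fid fid = { .seq = 0x200000900ULL, .oid = 1, .ver = 0 };

    printf("FID [0x%llx:0x%x:0x%x] is served by MDT%d\n",
           (unsigned long long)fid.seq, (unsigned)fid.oid, (unsigned)fid.ver,
           fid_to_mdt(&fid));
    return 0;
}
</syntaxhighlight>

A real client would consult a server for sequence numbers it does not yet know about; the sketch simply returns -1 in that case.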
For DNE striped directories, the per-directory layout stored on the parent directory provides a hash function and a list of MDT directory FIDs across which the directory is distributed. The Logical Metadata Volume (LMV) on the client hashes the filename and maps it to a specific MDT directory shard, which will handle further operations on that file in an identical manner to a non-striped directory. For readdir() operations, the entries from each directory shard are returned to the client sorted in the local MDT directory hash order, and the client performs a merge sort to interleave the filenames in hash order so that a single 64-bit cookie can be used to determine the current offset within the directory.

== Locking ==
The Lustre distributed lock manager (LDLM), implemented in the OpenVMS style, protects the integrity of each file's data and metadata. Access and modification of a Lustre file is completely cache coherent among all of the clients. Metadata locks are managed by the MDT that stores the inode for the file, using the FID as the resource name. The metadata locks are split into separate bits that protect the lookup of the file (file owner and group, permission and mode, and access control list (ACL)), the state of the inode (directory size, directory contents, link count, timestamps), the layout (file striping, since Lustre 2.4), and extended attributes (xattrs, since Lustre 2.5). A client can fetch multiple metadata lock bits for a single inode with a single RPC request, but currently they are only ever granted a read lock for the inode. The MDS manages all modifications to the inode in order to avoid lock resource contention and is currently the only node that gets write locks on inodes.

File data locks are managed by the OST on which each object of the file is striped, using byte-range extent locks. Clients can be granted overlapping read extent locks for part or all of the file, allowing multiple concurrent readers of the same file, and/or non-overlapping write extent locks for independent regions of the file. This allows many Lustre clients to access a single file concurrently for both read and write, avoiding bottlenecks during file I/O. In practice, because Linux clients manage their data cache in units of pages, the clients will request locks that are always an integer multiple of the page size (4096 bytes on most clients). When a client is requesting an extent lock, the OST may grant a lock for a larger extent than originally requested, in order to reduce the number of lock requests that the client makes. The actual size of the granted lock depends on several factors, including the number of currently granted locks on that object, whether there are conflicting write locks for the requested lock extent, and the number of pending lock requests on that object. The granted lock is never smaller than the originally requested extent. OST extent locks use the Lustre FID of the object as the resource name for the lock. Since the number of extent lock servers scales with the number of OSTs in the filesystem, this also scales the aggregate locking performance of the filesystem, and of a single file if it is striped over multiple OSTs.

== Networking ==
The communication between the Lustre clients and servers is implemented using Lustre Networking (LNet), which was originally based on the Sandia Portals network programming application programming interface.
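Returning briefly to the page-granular extent locking described in the Locking section above, the following minimal C sketch (with invented names, not Lustre code) rounds a requested byte range outward to whole 4 KiB pages, the smallest extent a client would ask for; the OST may then grant an even larger extent, as described above.

<syntaxhighlight lang="c">
/*
 * Illustrative sketch, not Lustre code: round a requested byte range
 * outward to page boundaries, since clients cache data in whole pages
 * and therefore request page-aligned extent locks. The OST may grow
 * the extent further before granting it, as described in the text.
 */
#include <stdint.h>
#include <stdio.h>

#define TOY_PAGE_SIZE 4096ULL

struct toy_extent {
    uint64_t start; /* first byte covered by the lock */
    uint64_t end;   /* last byte covered by the lock */
};

static struct toy_extent page_align(uint64_t start, uint64_t end)
{
    struct toy_extent ext;

    ext.start = start - start % TOY_PAGE_SIZE;                   /* round down */
    ext.end   = end + (TOY_PAGE_SIZE - 1 - end % TOY_PAGE_SIZE); /* round up */
    return ext;
}

int main(void)
{
    /* an application writes bytes 10000..20000 of a file */
    struct toy_extent ext = page_align(10000, 20000);

    /* prints [8192, 20479]: three whole 4 KiB pages */
    printf("requested lock extent: [%llu, %llu]\n",
           (unsigned long long)ext.start, (unsigned long long)ext.end);
    return 0;
}
</syntaxhighlight>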
Disk storage is connected to the Lustre MDS and OSS server nodes using direct attached storage (SAS, FC, iSCSI) or traditional storage area network (SAN) technologies, which is independent of the client-to-server network.

LNet can use many commonly used network types, such as InfiniBand and TCP (commonly Ethernet) networks, and allows simultaneous availability across multiple network types with routing between them. Remote Direct Memory Access (RDMA) is used for data and metadata transfer between nodes when provided by the underlying networks, such as InfiniBand, RoCE, iWARP, and Omni-Path. High availability and recovery features enable transparent recovery in conjunction with failover servers. Since Lustre 2.10, the LNet Multi-Rail (MR) feature[60] allows link aggregation of two or more network interfaces between a client and server to improve bandwidth. The LNet interface types do not need to be the same network type. In Lustre 2.12, Multi-Rail was enhanced to improve fault tolerance if multiple network interfaces are available between peers.

LNet provides end-to-end throughput over Gigabit Ethernet networks in excess of 100 MB/s,[61] throughput up to 3 GB/s using InfiniBand quad data rate (QDR) links, and throughput over 1 GB/s across 10 Gigabit Ethernet interfaces.{{Citation needed|date=September 2013}}

== High availability ==
Lustre file system high availability features include a robust failover and recovery mechanism, making server failures and reboots transparent. Version interoperability between successive minor versions of the Lustre software enables a server to be upgraded by taking it offline (or failing it over to a standby server), performing the upgrade, and restarting it, while all active jobs continue to run, experiencing a delay while the backup server takes over the storage. Lustre MDSes are configured as an active/passive pair, or as one or more active/active MDS pairs with DNE, while OSSes are typically deployed in an active/active configuration that provides redundancy without extra overhead. Often the standby MDS for one filesystem is the MGS and/or monitoring node, or the active MDS for another file system, so no nodes are idle in the cluster.

== HSM (Hierarchical Storage Management) ==
Lustre provides the capability to have multiple storage tiers within a single filesystem namespace. It allows traditional HSM functionality to copy (archive) files off the primary filesystem to a secondary archive storage tier. The archive tier is typically a tape-based system that is often fronted by a disk cache. Once a file is archived, it can be released from the main filesystem, leaving only a stub that references the archive copy. If a released file is opened, the Coordinator blocks the open, sends a restore request to a copytool, and then completes the open once the copytool has completed restoring the file.

In addition to external storage tiering, it is possible to have multiple storage tiers within a single filesystem namespace. OSTs of different types (e.g. HDD and SSD) can be declared in named storage pools. The OST pools can be selected when specifying file layouts, and different pools can be used within a single PFL file layout. Files can be migrated between storage tiers either manually or under control of the Policy Engine. Since Lustre 2.11, it is also possible to mirror a file to different OST pools with a FLR file layout, for example to pre-stage files into flash for a computing job.
HSM includes several additional Lustre components that manage the interface between the primary filesystem and the archive:
* Coordinator: runs on the MDS, receives archive and restore requests, and dispatches them to agent nodes.
* Agent: runs a copytool that moves data between the primary filesystem and the archive; several copytools exist for different archive back-ends.[62][63][64][65]
* Policy Engine: watches the filesystem and applies site policies to decide which files to archive and which archived files to release.
HSM also defines new states for files, including:[66]
* Exists: some copy, possibly incomplete, is present in the archive.
* Archived: a full copy of the file exists in the archive.
* Dirty: the primary copy of the file has been modified and differs from the archived copy.
* Released: only a stub inode remains on the primary filesystem, with the data available in the archive.
* Lost: the archive copy of the file has been lost and can no longer be restored.
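The archive, release, and restore flow described above can be summarized with a small illustrative C model. The state set and names are deliberately simplified and invented for this sketch; this is not the Lustre HSM API.

<syntaxhighlight lang="c">
/*
 * Illustrative model only; not the Lustre HSM API. Sketches the life
 * cycle described above: a file is archived to the secondary tier,
 * may then be released (leaving only a stub), and is restored on
 * demand when the released file is opened.
 */
#include <stdbool.h>
#include <stdio.h>

enum toy_hsm_state {
    TOY_HSM_NONE,     /* no copy in the archive */
    TOY_HSM_ARCHIVED, /* a copy exists in the archive */
    TOY_HSM_RELEASED, /* only a stub remains on the primary filesystem */
};

struct toy_file {
    const char *name;
    enum toy_hsm_state state;
};

static void toy_archive(struct toy_file *f)
{
    /* a copytool copies the file data to the archive tier */
    f->state = TOY_HSM_ARCHIVED;
}

static bool toy_release(struct toy_file *f)
{
    if (f->state != TOY_HSM_ARCHIVED)
        return false; /* a file must be archived before it can be released */
    /* data dropped from the primary tier; a stub keeps the reference */
    f->state = TOY_HSM_RELEASED;
    return true;
}

static void toy_open(struct toy_file *f)
{
    if (f->state == TOY_HSM_RELEASED) {
        /* the Coordinator blocks the open and asks a copytool to
         * restore the data before the open completes */
        printf("%s: restoring from archive before open\n", f->name);
        f->state = TOY_HSM_ARCHIVED;
    }
    printf("%s: open completed\n", f->name);
}

int main(void)
{
    struct toy_file f = { .name = "results.dat", .state = TOY_HSM_NONE };

    toy_archive(&f);
    toy_release(&f);
    toy_open(&f); /* triggers the implicit restore */
    return 0;
}
</syntaxhighlight>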
== Deployments ==
Lustre is used by many of the TOP500 supercomputers and large multi-cluster sites. Six of the top 10 and more than 60 of the top 100 supercomputers use Lustre file systems. These include: the K computer at the RIKEN Advanced Institute for Computational Science,[9] the Tianhe-1A at the National Supercomputing Center in Tianjin, China, the Jaguar and Titan at Oak Ridge National Laboratory (ORNL), Blue Waters at the University of Illinois, and Sequoia and Blue Gene/L at Lawrence Livermore National Laboratory (LLNL). There are also large Lustre filesystems at the National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Texas Advanced Computing Center, Brazilian National Laboratory of Scientific Computing,[67] and NASA[68] in North America, in Asia at Tokyo Institute of Technology,[69] in Europe at CEA,[70][71] and many others.

== Commercial technical support ==
Commercial technical support for Lustre is often bundled along with the computing system or storage hardware sold by the vendor. Such vendors include Cray,[72] Dell,[73] Hewlett-Packard (as the HP StorageWorks Scalable File Share, circa 2004 through 2008),[74] Groupe Bull, Silicon Graphics International,[75][76] and Fujitsu.[77] Vendors selling storage hardware with bundled Lustre support include Hitachi Data Systems,[78] DataDirect Networks (DDN),[79] NetApp, Seagate Technology,[80] and others. It is also possible to get software-only support for Lustre file systems from some vendors, including Whamcloud.[81]

== See also ==
{{Portal|Free and open-source software}}
== References ==
1. ^1 {{cite web|url = https://lwn.net/Articles/63536/|title = Lustre 1.0 released|date = December 17, 2003|accessdate = March 15, 2015|website = Linux Weekly News|publisher = LWN.net|last = Corbet|first = Jonathan}} 2. ^{{cite web|url = http://lustre.org/lustre-2-10-6-released/ | title = Lustre 2.10.6 released | date = December 12, 2018 | accessdate = December 12, 2018 | website = Lustre.org}} 3. ^{{cite web|url=https://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.pdf|title=Lustre* Software Release 2.x Operations Manual|date= August 4, 2002 |author= Oracle Corporation / Intel Corporation |work= Instruction Manual|publisher=Intel |accessdate= May 19, 2015}} 4. ^{{cite web| url=http://www.lustre.org/| title=Lustre Home|archiveurl = https://web.archive.org/web/20010331103824/http://www.lustre.org/|archivedate = March 31, 2001 |accessdate= September 23, 2013}} 5. ^{{cite web|url=http://opensfs.org/press-releases/lustre-file-system-version-2-4-released/| title=Lustre File System, Version 2.4 Released| publisher=Open Scalable File Systems| accessdate = 2014-10-18}} 6. ^{{cite web|url=http://www.cnet.com/news/open-source-lustre-gets-supercomputing-nod/| title=Open-source Lustre gets supercomputing nod| accessdate = 2014-10-18}} 7. ^{{cite web|url=http://www.hpcwire.com/2013/02/21/xyratex_captures_oracle_s_lustre/| title=Xyratex Captures Oracle's Lustre| publisher=HPCWire| accessdate = 2014-10-18}} 8. ^{{cite web|url=https://www.olcf.ornl.gov/kb_articles/titan-system-overview/| title=Titan System Overview| publisher=Oak Ridge National Laboratory| accessdate = 2013-09-19}} 9. ^1 2 {{cite web |url = http://zfsonlinux.org/docs/LUG11_ZFS_on_Linux_for_Lustre.pdf |title = ZFS on Linux for Lustre |author = Brian Behlendorf |publisher = Lawrence Livermore National Laboratory |accessdate = 2014-10-18 |deadurl = yes |archiveurl = https://web.archive.org/web/20141031152502/http://zfsonlinux.org/docs/LUG11_ZFS_on_Linux_for_Lustre.pdf |archivedate = 2014-10-31 |df = }} 10. ^{{cite web | url = https://www.olcf.ornl.gov/kb_articles/spider-the-center-wide-lustre-file-system/ | title = Spider Center-Wide File System | accessdate = 2012-02-02 | publisher = Oak Ridge Leadership Computing Facility}} 11. ^{{cite web | url = http://www.opensfs.org/wp-content/uploads/2011/11/Rock-Hard1.pdf | title = Rock-Hard Lustre: Trends in Scalability and Quality | accessdate = 2012-02-02 | publisher = Nathan Rutman, Xyratex}} 12. ^{{YouTube |id=2p11g82SY1E |title= Lustre File System presentation, November 2007 }} By Peter Braam, November 10, 2007 13. ^{{Cite web |title=Company |work=old web site |publisher=Cluster File Systems, Inc. |url=http://www.clusterfs.com/company.html |archivedate=August 12, 2007 |archiveurl=https://web.archive.org/web/20070812020737/http://www.clusterfs.com/company.html |deadurl=bot: unknown |df= }} 14. ^{{cite web|url=https://asc.llnl.gov/computing_resources/bluegenel/talks/braam.pdf|title=Lustre, The Inter-Galactic File System|date= August 4, 2002 |author= Peter J. Braam |work= Presentation slides|publisher=Lawrence Livermore National Laboratory |accessdate= September 23, 2013}} 15. ^{{cite web |title= The Ultra-Scalable HPTC Lustre Filesystem |author= R. Kent Koeninger |date= June 2003 |work= Slides for presentation at Cluster World 2003 |url= http://www.linuxclustersinstitute.org/conferences/archive/2003/PDF/C04-Koeninger_K.pdf |accessdate= September 23, 2013 }} 16.
^{{cite web| url= http://www.linux-magazine.com/online/news/sun_assimilates_lustre_filesystem?category=13402| title= Sun Assimilates Lustre Filesystem| date= September 13, 2007 |author= Britta Wülfing| publisher=Linux Magazine |accessdate= September 23, 2013}} 17. ^{{cite news |title= Sun Microsystems Expands High Performance Computing Portfolio with Definitive Agreement to Acquire Assets of Cluster File Systems, Including the Lustre File System |work= Press release |publisher= Sun Microsystems |date= September 12, 2007 |url= http://www.sun.com/aboutsun/pr/2007-09/sunflash.20070912.2.xml |deadurl=yes |archiveurl= https://web.archive.org/web/20071002091821/http://www.sun.com/aboutsun/pr/2007-09/sunflash.20070912.2.xml |archivedate= October 2, 2007 |accessdate= September 23, 2013 }} 18. ^{{cite web| url=http://insidehpc.com/2011/01/10/inside-track-oracle-has-kicked-lustre-to-the-curb/| title=Oracle has Kicked Lustre to the Curb| date=2011-01-10| publisher=Inside HPC}} 19. ^{{cite news |url= http://insidehpc.com/2010/08/20/whamcloud-aims-to-make-sure-lustre-has-a-future-in-hpc/| title=Whamcloud aims to make sure Lustre has a future in HPC| date= August 20, 2010 |author= J. Leidel|work= Inside HPC |accessdate= September 23, 2013 }} 20. ^1 {{cite news |title= Xyratex Advances Lustre® Initiative, Assumes Ownership of Related Assets |work= Press release |date= February 19, 2013 |publisher= Xyratex |url= http://www.xyratex.com/news/press-releases/xyratex-advances-lustre%C2%AE-initiative-assumes-ownership-related-assets |accessdate= September 18, 2013 }} 21. ^{{cite news |title= Bojanic & Braam Getting Lustre Band Back Together at Xyratex |date= November 9, 2010 |author= Rich Brueckner |work= Inside HPC |url= http://insidehpc.com/2010/11/09/bojanic-braam-getting-lustre-band-back-together-at-xyratex/ |accessdate= September 23, 2013 }} 22. ^{{cite web| url = http://insidehpc.com/2011/01/04/whamcloud-staffs-up-for-brighter-lustre/| title = Whamcloud Staffs up for Brighter Lustre|work= Inside HPC |author= Rich Brueckner |date= January 4, 2011 |accessdate= September 18, 2013}} 23. ^{{cite news| url = http://www.hpcwire.com/hpcwire/2011-08-16/whamcloud_signs_multi-year_lustre_development_contract_with_opensfs.html| title = Whamcloud Signs Multi-Year Lustre Development Contract With OpenSFS| date = August 16, 2011 |work= Press release| publisher = HPC Wire |accessdate= September 23, 2013}} 24. ^{{cite web| url = http://www.opensfs.org/wp-content/uploads/2011/11/SC11-OpenSFS-Update.pdf| title = OpenSFS Update |author= Galen Shipman| date = November 18, 2011 |work= Slides for Supercomputing 2011 presentation| publisher = Open Scalable File Systems |accessdate= September 23, 2013}} 25. ^{{cite web| url = https://www.reuters.com/article/2011/11/15/idUS215861+15-Nov-2011+MW20111115| title = OpenSFS and Whamcloud Sign Lustre Community Tree Development Agreement| date = November 15, 2011 |author= Whamcloud|work= Press release |accessdate= September 23, 2013}} 26. ^{{cite web| url = http://www.pcworld.com/article/259328/intel_purchases_lustre_purveyor_whamcloud.html| title = Intel Purchases Lustre Purveyor Whamcloud| date = 2012-07-16| publisher = PC World| author = Joab Jackson}} 27. ^{{cite web| url = https://www.theregister.co.uk/2012/07/16/intel_buys_whamcloud/| title = Intel gobbles Lustre file system expert Whamcloud| date = 2012-07-16| publisher = The Register| author = Timothy Prickett Morgan}} 28. 
^{{cite web| url = https://www.theregister.co.uk/2012/07/11/doe_fastforward_amd_whamcloud/| title = DOE doles out cash to AMD, Whamcloud for exascale research| date = 2012-07-11| publisher = The Register| author = Timothy Prickett Morgan}} 29. ^{{cite news |title= Intel Carves Mainstream Highway for Lustre |author= Nicole Hemsoth |date= June 12, 2013 |work= HPC Wire |url= http://www.hpcwire.com/2013/06/12/intel_builds_mainstream_highways_for_lustre/ |accessdate= September 23, 2013 }} 30. ^{{cite web|last=Brueckner|first=Rich|title=With New RFP, OpenSFS to Invest in Critical Open Source Technologies for HPC|url=http://insidehpc.com/2013/02/21/with-new-rfp-opensfs-to-invest-in-critical-open-source-technologies-for-hpc/|publisher=insideHPC|accessdate=1 October 2013}} 31. ^{{cite web|title=Seagate Donates Lustre.org Back to the User Community|url=http://insidehpc.com/2014/04/seagate-donates-lustre-org-user-community/|accessdate=9 September 2014}} 32. ^{{cite web|url=https://www.nextplatform.com/2018/06/27/ddn-breathes-new-life-into-lustre-file-system/|title=DDN Breathes New Life Into Lustre File System|date=June 27, 2018|author=Daniel Robinson}} 33. ^{{cite web |url = http://www.taborcommunications.com/dsstar/03/1125/107031.html |title = Lustre Helps Power Third Fastest Supercomputer |publisher = DSStar |deadurl = yes |archiveurl = https://archive.is/20130203232617/http://www.taborcommunications.com/dsstar/03/1125/107031.html |archivedate = 2013-02-03 |df = }} 34. ^{{cite web| url=http://www.top500.org/system/6085| title=MCR Linux Cluster Xeon 2.4 GHz – Quadrics| publisher=Top500.Org}} 35. ^{{cite web| url=http://www.hpcuserforum.com/presentations/Tucson/SUN%20%20Lustre_Update-080615.pdf| title = Lustre Roadmap and Future Plans |date= June 15, 2008 |work= Presentation to Sun HPC Consortium| accessdate = September 23, 2013 |author= Peter Bojanic| publisher = Sun Microsystems}} 36. ^{{cite web| url = http://www.whamcloud.com/news-and-events/opensfs-announces-collaborative-effort-to-support-lustre-2-1-community-distribution/ |deadurl= yes |archiveurl= https://web.archive.org/web/20110523050307/http://www.whamcloud.com/news-and-events/opensfs-announces-collaborative-effort-to-support-lustre-2-1-community-distribution/ |archivedate= May 23, 2011 |date= February 8, 2011| title = OpenSFS Announces Collaborative Effort to Support Lustre 2.1 Community Distribution| accessdate = December 13, 2016| publisher = Open Scalable File Systems }} 37. ^{{cite web| url = http://www.marketwire.com/press-release/lustre-21-released-1567596.htm| title = Lustre 2.1 Released| accessdate = 2012-02-02}} 38. ^{{cite web| url = https://finance.yahoo.com/news/lustre-2-2-released-165800113.html| title = Lustre 2.2 Released| accessdate = 2012-05-08| publisher = Yahoo! Finance}} 39. ^{{cite web|url = http://wiki.lustre.org/images/2/22/A_Novel_Network_Request_Scheduler_for_a_Large_Scale_Storage_System.pdf|title = A Novel Network Request Scheduler for a Large Scale Storage System|date = June 2009|website = Lustre Wiki|publisher = OpenSFS}} 40. ^{{cite web|url = https://www.researchgate.net/publication/220232795_A_Novel_network_request_scheduler_for_a_large_scale_storage_system|title = A Novel Network Request Scheduler for a Large Scale Storage System|date = June 2009|website = Lustre Wiki|publisher = OpenSFS}} 41. ^{{cite web|last=Prickett Morgan|first=Timothy|title=OpenSFS Announces Availability of Lustre 2.5|url=http://www.enterprisetech.com/2013/11/05/opensfs-announces-availability-lustre-2-5/|publisher=EnterpriseTech}} 42. 
^{{cite web|last=Brueckner|first=Rich|title=Video: New Lustre 2.5 Release Offers HSM Capabilities|url=http://inside-bigdata.com/2013/11/05/video-new-lustre-2-5-release-offers-hsm-capabilities/|publisher=Inside Big Data|accessdate=11 December 2013}} 43. ^{{cite web|last=Hemsoth|first=Nicole|title=Lustre Gets Business Class Upgrade with HSM|url=http://archive.hpcwire.com/hpcwire/2013-11-06/lustre_scores_business_class_upgrade_with_hsm.html|publisher=HPCwire|accessdate=11 December 2013}} 44. ^{{cite web|title=Lustre 2.5|url=http://www.scientific-computing.com/products/product_details.php?product_id=1727|publisher=Scientific Computing World|accessdate=11 December 2013}} 45. ^{{cite web|url = https://lists.01.org/pipermail/hpdd-discuss/2014-September/001211.html|title = Lustre 2.5.3 released|date = September 9, 2014|accessdate = October 21, 2014|website = HPDD-discuss mailing list archive|publisher = |last = Jones |first = Peter}}{{cite web|url = http://wiki.lustre.org/Retired_Release_Terminology | title = Retired Release Terminology | date = Dec 7, 2015 | accessdate = January 18, 2016 | website = Lustre Wiki|publisher = |last = Morrone|first = Chris}} 46. ^{{cite web|url = https://lists.01.org/pipermail/hpdd-discuss/2014-July/001153.html|title = Lustre 2.6.0 released|date = July 30, 2014|accessdate = October 21, 2014|website = HPDD-discuss mailing list archive|publisher = |last = |first = }} 47. ^{{cite web|url=http://cdn.opensfs.org/wp-content/uploads/2014/10/7-DDN_LiXi_lustre_QoS.pdf|title=Lustre QoS Based on NRS Policy of Token Bucket Filter|first=Shuichi|last=Ihara|date=2014-10-14}} 48. ^{{cite web|url=http://opensfs.org/wp-content/uploads/2014/04/D1_S6_LustreClientIOPerformanceImprovements.pdf|title=Demonstrating the Improvement in the Performance of a Single Lustre Client from Version 1.8 to Version 2.6|first=Andrew|last=Uselton|accessdate=2014-10-18}} 49. ^{{cite web|url = https://lists.01.org/pipermail/hpdd-discuss/2015-March/001829.html|title = Lustre 2.7.0 released|date = March 13, 2015|accessdate = March 15, 2015|website = HPDD-discuss mailing list archive|publisher = |last = Jones|first = Peter}} 50. ^{{cite web|url = http://lists.lustre.org/pipermail/lustre-announce-lustre.org/2016/000137.html|title = Lustre 2.8.0 released|date = March 16, 2016|accessdate = March 28, 2016|website = Lustre-announce mailing list archive|publisher = OpenSFS|last = Jones|first = Peter}} 51. ^{{cite web|url = http://wiki.lustre.org/Lustre_2.9.0_Changelog|title = Lustre 2.9.0 Changelog|date = December 7, 2016|accessdate = December 8, 2016|website = Lustre Wiki|publisher = OpenSFS}} 52. ^{{cite web|url = http://wiki.lustre.org/Lustre_2.10.0_Changelog|title = Lustre 2.10.0 Changelog|date = July 13, 2017|accessdate = October 3, 2017|website = Lustre Wiki|publisher = OpenSFS}} 53. ^{{cite web|url = http://wiki.lustre.org/Release_2.11.0|title = Release 2.11.0|date = April 3, 2018|accessdate = April 4, 2018|website = Lustre Wiki|publisher = OpenSFS}} 54. ^1 {{cite web|url = http://wiki.lustre.org/Release_2.12.0|title = Release 2.12.0|date = December 21, 2018|accessdate = February 11, 2019|website = Lustre Wiki|publisher = OpenSFS}} 55. ^{{cite web|url=http://gcn.com/Articles/2008/03/26/Lustre-to-run-on-ZFS.aspx?p=1|title=Lustre to run on ZFS|date=2008-10-26|publisher=Government Computer News}} 56. ^{{cite web|url=http://zfsonlinux.org/lustre.html|title=ZFS on Lustre|date=2011-05-10}} 57. 
^{{cite web| url=http://www.tgc.com/hpcwire/hpcwireWWW/04/1015/108577.html| title = DataDirect Selected As Storage Tech Powering BlueGene/L|work= HPC Wire |date= October 15, 2004}} 58. ^{{cite web| url=http://www.sandia.gov/~smkelly/SAND2006-2561C-CUG2006-CatamountDualCore.pdf| title=Catamount Software Architecture with Dual Core Extensions| author=Suzanne M. Kelly| date=2006| accessdate=2016-02-16}} 59. ^{{cite web| url=http://lkml.iu.edu/hypermail/linux/kernel/1806.2/00125.html| title=Linux Kernel 4.18rc1 release notes}} 60. ^{{cite web |author = Shehata, Amir |url = http://wiki.lustre.org/images/7/7c/LUG2016D2_Multi-Rail-LNet-for-Lustre_Shehata_Weber_v2.pdf |title = Multi-Rail LNet for Lustre |publisher = Lustre User Group, April 2016}} 61. ^{{cite web |author = Lafoucrière, Jacques-Charles |url = http://hepix.caspur.it/afs/hepix.org/project/strack/hep_pdf/2007/Spring/Lustre-CEA-hepix2007.pdf |title = Lustre Experience at CEA/DIF |publisher = HEPiX Forum, April 2007 |deadurl = yes |archiveurl = https://www.webcitation.org/65tGGL6n1?url=http://hepix.caspur.it/afs/hepix.org/project/strack/hep_pdf/2007/Spring/Lustre-CEA-hepix2007.pdf |archivedate = 2012-03-03 |df = }} 62. ^{{cite web|url=https://www.eofs.eu/_media/events/lad13/10_aurelien_degremont_lustre_hsm_lad13.pdf|title=LUSTRE/HSM BINDING IS THERE!|date=September 17, 2013|author=Aurélien Degrémont}} 63. ^{{cite web|url=https://www.eofs.eu/_media/events/lad16/08_tsm_copytool_for_lustre_stibor.pdf|title=TSM Copytool for Lustre HSM|author=Thomas Stibor|date=September 20, 2016}} 64. ^{{cite web|url=http://wiki.lustre.org/images/e/ea/Lustre-HSM-in-the-Cloud_Read.pdf|title=Lustre HSM in the Cloud|author=Robert Read|date=March 24, 2015}} 65. ^{{cite web|url=https://github.com/stanford-rc/ct_gdrive/blob/master/README.md|title=Lustre/HSM Google Drive copytool|author=Stéphane Thiell}} 66. ^{{cite web|url=http://wiki.lustre.org/images/4/4d/Lustre_hsm_seminar_lug10.pdf|title=Lustre HSM Project—Lustre User Advanced Seminars|author1=Aurélien Degrémont|author2=Thomas Leibovici|date=April 16, 2009|archiveurl=https://web.archive.org/web/20100525053326/http://wiki.lustre.org/images/4/4d/Lustre_hsm_seminar_lug10.pdf|archivedate=May 25, 2010|deadurl=no|accessdate=May 5, 2018}} 67. ^{{cite web|url=http://www.lncc.br/frame.html |title=LNCC – Laboratório Nacional de Computação Científica |publisher=Lncc.br |date= |accessdate=2015-05-27}} 68. ^{{cite web| url=http://www.nas.nasa.gov/Resources/Systems/pleiades.html| title=Pleiades Supercomputer| date=2008-08-18| publisher=www.nas.nasa.gov}} 69. ^{{cite web| url=http://www.top500.org/system/8216| title=TOP500 List – November 2006| publisher=TOP500.Org}} 70. ^{{cite web| url=http://www.top500.org/system/8237| title=TOP500 List – June 2006| publisher=TOP500.Org}} 71. ^{{cite web| url=http://www.hpcwire.com/hpcwire/2012-01-25/french_atomic_energy_group_expands_hpc_file_system_to_11_petabytes.html| title=French Atomic Energy Group Expands HPC File System to 11 Petabytes| date=2012-06-15| publisher=HPCwire.com}} 72. ^{{cite web| url=http://www.cray.com/products/storage/sonexion| title=Sonexion Scale-out Lustre Storage System| date=2015-04-14}} 73. ^{{cite web| url=http://www.dellhpcsolutions.com/| title=Dell HPC Solutions| date=2015-04-14}} 74. 
^{{Cite web |title= HP StorageWorks Scalable File Share |publisher= Hewlett-Packard |url= http://h20311.www2.hp.com/HPC/cache/276636-0-0-0-121.html |archivedate= June 12, 2008 |archiveurl= https://web.archive.org/web/20080612182519/http://h20311.www2.hp.com/HPC/cache/276636-0-0-0-121.html |accessdate= December 13, 2016 }} 75. ^{{cite web| url=https://www.sgi.com/products/storage/lustre/| title=SGI – Products: Storage: Lustre Solutions| date=2015-04-14}} 76. ^{{cite web| url=http://www.sgi.com/services/professional/file_management.html| title= File management consulting| work= SGI Professional Services web site}} 77. ^{{cite web| url=http://www.fujitsu.com/global/about/resources/news/press-releases/2011/1017-01.html| title=Fujitsu Releases World's Highest-Performance File System – FEFS scalable file system software for advanced x86 HPC cluster systems| date=2015-06-13}} 78. ^{{cite web| url=http://www.hds.com/solutions/industries/oil-and-gas/exploration-and-production/high-throughput-storage-solutions.html| title=High Throughput Storage Solutions with Lustre| date=2015-04-14}} 79. ^{{cite web| url=http://www.ddn.com/products/lustre-file-system-exascaler/| title=Exascaler: Massively Scalable, High Performance, Lustre File System Appliance| date=2015-04-14}} 80. ^{{cite web| url=http://www.seagate.com/products/enterprise-servers-storage/enterprise-storage-systems/clustered-file-systems/| title=ClusterStor Parallel Storage System| date=2015-04-14}} 81. ^{{cite web| url= http://whamcloud.com/support/| title=Lustre Support| date=2018-11-27}} 82. ^{{cite web|last1=Black|first1=Doug|title=Cray Moves to Acquire the Seagate ClusterStor Line|url=https://www.hpcwire.com/2017/07/28/cray-moves-acquire-seagate-clusterstor-line/|website=HPCWire|publisher=HPCWire|accessdate=2017-12-01}}

== External links ==

=== Information wikis ===
=== Community foundations ===
=== Hardware/software vendors ===
Categories: 2002 software | Computer file systems | Distributed file systems supported by the Linux kernel | Network file systems | Sun Microsystems software | Free special-purpose file systems | Distributed file systems