Fusion-io NVMFS
Property | Value |
---|---|
dbo:abstract | SanDisk/Fusion-io's NVMFS file system, formerly known as Direct File System (DFS), accesses flash memory via a virtual flash storage layer instead of using the traditional block layer API. This file system has two main novel features. First, it lays out files directly in a very large virtual storage address space. Second, it leverages the virtual flash storage layer to perform block allocations and atomic updates. As a result, NVMFS performs better and is much simpler than a traditional Unix file system with similar functionalities. Additionally, this approach avoids the log-on-log performance issues triggered by log-structured file systems. Microbenchmark results show that NVMFS can deliver 94,000 I/O operations per second (IOPS) for direct reads and 71,000 IOPS for direct writes with the virtualized flash storage layer on top of a first-generation Fusion-io ioDrive. For direct access performance, NVMFS is consistently better than ext3 on the same platform, sometimes by 20%. For buffered access performance, NVMFS is also consistently better than ext3, and sometimes by over 149%. Application benchmarks show that NVMFS outperforms ext3 by 7% to 250% while requiring less CPU power. Additionally, I/O latency is lower with NVMFS compared to ext3. (en) |
dbo:wikiPageID | 44551083 (xsd:integer) |
dbo:wikiPageLength | 4392 (xsd:nonNegativeInteger) |
dbo:wikiPageRevisionID | 1059275131 (xsd:integer) |
dbo:wikiPageWikiLink | dbc:File_systems_supported_by_the_Linux_kernel dbr:SanDisk dbc:Flash_file_systems dbr:Fusion-io dbr:Garbage_collection_(computer_science) dbr:Log-structured_file_system dbr:Flash_memory dbc:Compression_file_systems dbr:File_system dbr:Wear_leveling dbr:Ext3 |
dbp:wikiPageUsesTemplate | dbt:Orphan dbt:Reflist dbt:Filesystem |
dct:subject | dbc:File_systems_supported_by_the_Linux_kernel dbc:Flash_file_systems dbc:Compression_file_systems |
rdfs:comment | SanDisk/Fusion-io's NVMFS file system, formerly known as Direct File System (DFS), accesses flash memory via a virtual flash storage layer instead of using the traditional block layer API. This file system has two main novel features. First, it lays out files directly in a very large virtual storage address space. Second, it leverages the virtual flash storage layer to perform block allocations and atomic updates. As a result, NVMFS performs better and is much simpler than a traditional Unix file system with similar functionalities. Additionally, this approach avoids the log-on-log performance issues triggered by log-structured file systems. Microbenchmark results show that NVMFS can deliver 94,000 I/O operations per second (IOPS) for direct reads and 71,000 IOPS for direct writes. (en) |
rdfs:label | Fusion-io NVMFS (en) |
owl:sameAs | freebase:Fusion-io NVMFS yago-res:Fusion-io NVMFS wikidata:Fusion-io NVMFS https://global.dbpedia.org/id/rxQ4 |
prov:wasDerivedFrom | wikipedia-en:Fusion-io_NVMFS?oldid=1059275131&ns=0 |
foaf:isPrimaryTopicOf | wikipedia-en:Fusion-io_NVMFS |
is dbo:wikiPageRedirects of | dbr:NVMFS |
is dbo:wikiPageWikiLink of | dbr:NVMFS |
is foaf:primaryTopic of | wikipedia-en:Fusion-io_NVMFS |