fix some typos (#35)
commit 963ac6f5ad
parent fc346b0071
@@ -11,7 +11,7 @@ The Fire-Flyer File System (3FS) is a high-performance distributed file system d
- **File Interfaces** Develops stateless metadata services backed by a transactional key-value store (e.g., FoundationDB). The file interface is well known and used everywhere. There is no need to learn a new storage API.
- Diverse Workloads
- **Data Preparation** Organizes outputs of data analytics pipelines into hierarchical directory structures and manages a large volume of intermediate outputs efficiently.
- **Dataloaders** Eliminates the need for prefetching or shuffling datasets by enabling random access to training samples across compute nodes (see the read sketch after this list).
- **Checkpointing** Supports high-throughput parallel checkpointing for large-scale training (see the checkpointing sketch after this list).
- **KVCache for Inference** Provides a cost-effective alternative to DRAM-based caching, offering high throughput and significantly larger capacity.
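Because the storage is reached through the ordinary file interface, a dataloader can fetch training samples with plain file I/O and seek to arbitrary offsets. Below is a minimal sketch of that access pattern, assuming samples are stored as fixed-size records in a single file under a hypothetical mount path `/3fs/dataset.bin`; the path and record layout are illustrative, not part of 3FS.

```python
import os
import random

RECORD_SIZE = 4096                    # hypothetical fixed-size sample record
DATASET = "/3fs/dataset.bin"          # hypothetical path under a 3FS mount point

def read_sample(f, index: int) -> bytes:
    """Random access: seek straight to the record, no prefetch or shuffle pass."""
    f.seek(index * RECORD_SIZE)
    return f.read(RECORD_SIZE)

with open(DATASET, "rb") as f:
    num_records = os.fstat(f.fileno()).st_size // RECORD_SIZE
    # Each compute node draws its own random indices independently.
    for index in random.sample(range(num_records), k=min(8, num_records)):
        sample = read_sample(f, index)
```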
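High-throughput parallel checkpointing typically means each training rank writes its own shard concurrently through the same file interface. A minimal sketch follows, assuming a hypothetical checkpoint directory `/3fs/ckpt/step_1000` and that each rank already holds its serialized state as bytes; the directory layout and names are illustrative only.

```python
import os

CKPT_DIR = "/3fs/ckpt/step_1000"      # hypothetical checkpoint directory on a 3FS mount

def save_shard(rank: int, state: bytes) -> None:
    """Each rank writes its own shard file; ranks proceed in parallel."""
    os.makedirs(CKPT_DIR, exist_ok=True)
    path = os.path.join(CKPT_DIR, f"shard_{rank:05d}.bin")
    with open(path, "wb") as f:
        f.write(state)
        f.flush()
        os.fsync(f.fileno())          # make the shard durable before reporting success

# e.g. on rank 3: save_shard(3, serialized_model_and_optimizer_state)
```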
@@ -43,18 +43,18 @@ The test cluster comprised 25 storage nodes (2 NUMA domains/node, 1 storage serv
### 3. KVCache
KVCache is a technique used to optimize the LLM inference process. It avoids redundant computations by caching the key and value vectors of previous tokens in the decoder layers.
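To make the cached quantities concrete, here is a minimal NumPy sketch of KV caching in a single attention head during decoding; the dimensions and projection weights are toy values and are unrelated to the 3FS API. Each step computes the key and value only for the newest token, appends them to the cache, and attends over everything cached so far.

```python
import numpy as np

d = 64                                                   # toy head dimension
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))   # toy projection weights
k_cache = np.empty((0, d))                               # keys of previous tokens
v_cache = np.empty((0, d))                               # values of previous tokens

def decode_step(x: np.ndarray) -> np.ndarray:
    """x: hidden state of the newest token, shape (d,). Returns attention output."""
    global k_cache, v_cache
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Only the new token's key/value are computed; older ones come from the cache.
    k_cache = np.vstack([k_cache, k])
    v_cache = np.vstack([v_cache, v])
    scores = k_cache @ q / np.sqrt(d)                    # attend over all cached tokens
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v_cache
```

Serving systems can then persist and fetch these per-request caches through 3FS rather than holding them all in DRAM, which is the trade-off the measurements below illustrate.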
The top figure demonstrates the read throughput of all KVCache clients, highlighting both peak and average values, with peak throughput reaching up to 40 GiB/s. The bottom figure presents the IOPS of remove operations issued by garbage collection (GC) during the same time period.

|

|
||||||

|

|
||||||
## Check out source code
Clone the 3FS repository from GitHub:
git clone https://github.com/deepseek-ai/3fs
When `deepseek-ai/3fs` has been cloned to a local file system, run the following commands to check out the submodules:
```bash
git submodule update --init --recursive   # assumed standard submodule checkout
```