<p align="center"><a href="https://github.com/juicedata/juicefs"><img alt="JuiceFS Logo" src="docs/en/images/juicefs-logo-new.svg" width="50%" /></a></p> <p align="center"> <a href="https://github.com/juicedata/juicefs/actions/workflows/unittests.yml"><img alt="GitHub Workflow Status" src="https://img.shields.io/github/actions/workflow/status/juicedata/juicefs/unittests.yml?branch=main&label=Unit%20Testing" /></a> <a href="https://github.com/juicedata/juicefs/actions/workflows/integrationtests.yml"><img alt="GitHub Workflow Status" src="https://img.shields.io/github/actions/workflow/status/juicedata/juicefs/integrationtests.yml?branch=main&label=Integration%20Testing" /></a> <a href="https://goreportcard.com/report/github.com/juicedata/juicefs"><img alt="Go Report" src="https://goreportcard.com/badge/github.com/juicedata/juicefs" /></a> <a href="https://juicefs.com/docs/community/introduction"><img alt="English doc" src="https://img.shields.io/badge/docs-Doc%20Center-brightgreen" /></a> <a href="https://go.juicefs.com/slack"><img alt="Join Slack" src="https://badgen.net/badge/Slack/Join%20JuiceFS/0abd59?icon=slack" /></a> </p>

JuiceFS is a high-performance POSIX file system released under Apache License 2.0, designed particularly for cloud-native environments. Data stored via JuiceFS is persisted in object storage (e.g. Amazon S3), while the corresponding metadata can be persisted in various database engines such as Redis, MySQL, and TiKV, depending on the scenario and requirements.
With JuiceFS, massive cloud storage can be connected directly to big data, machine learning, artificial intelligence, and various application platforms in production environments. Without modifying any code, it can be used as efficiently as local storage.
📖 Document: Quick Start Guide
Highlighted Features
- Fully POSIX-compatible: Use JuiceFS as a local file system, integrating seamlessly with existing applications without breaking business workflows.
- Fully Hadoop-compatible: JuiceFS' Hadoop Java SDK is compatible with Hadoop 2.x and Hadoop 3.x, as well as a variety of components in the Hadoop ecosystem.
- S3-compatible: JuiceFS' S3 Gateway provides an S3-compatible interface.
- Cloud Native: A Kubernetes CSI Driver is provided for easily using JuiceFS in Kubernetes.
- Shareable: JuiceFS is shared file storage that can be read and written by thousands of clients.
- Strong Consistency: Any committed modification is immediately visible on all servers that mount the same file system.
- Outstanding Performance: Latency can be as low as a few milliseconds, and throughput scales almost without limit (depending on the size of the object storage). Test results
- Data Encryption: Supports data encryption in transit and at rest (please refer to the guide for more information).
- Global File Locks: JuiceFS supports both BSD locks (flock) and POSIX record locks (fcntl).
- Data Compression: JuiceFS supports LZ4 or Zstandard to compress all your data.
Architecture | Getting Started | Advanced Topics | POSIX Compatibility | Performance Benchmark | Supported Object Storage | Who is using | Roadmap | Reporting Issues | Contributing | Community | Usage Tracking | License | Credits | FAQ
Architecture
JuiceFS consists of three parts:
- JuiceFS Client: Coordinates the object storage and metadata engine, and implements file system interfaces such as POSIX, Hadoop, Kubernetes, and the S3 gateway.
- Data Storage: Stores the data itself, supporting a variety of storage media, e.g., local disk, public or private cloud object storage, and HDFS.
- Metadata Engine: Stores the corresponding metadata, such as file names, sizes, permissions, creation and modification times, and directory structure, supporting different metadata engines, e.g., Redis, MySQL, SQLite and TiKV.
JuiceFS can store the file system's metadata in Redis, a fast, open-source, in-memory key-value data store that is particularly suitable for metadata; meanwhile, all the data is stored in object storage through the JuiceFS client. Learn more
Each file stored in JuiceFS is split into fixed-size "Chunks" with a default upper limit of 64 MiB. Each Chunk is composed of one or more "Slices", whose length varies depending on how the file is written. Each Slice is in turn composed of fixed-size "Blocks", 4 MiB by default. These Blocks are eventually stored in object storage, while the metadata of the file and its Chunks, Slices, and Blocks is stored in the metadata engine via JuiceFS. Learn more
When using JuiceFS, files are ultimately split into Chunks, Slices and Blocks and stored in object storage. Therefore, you will not find the source files in the file browser of the object storage platform; the bucket contains only a chunks directory and a bunch of numerically named directories and files. Don't panic! This is just the secret of JuiceFS's high-performance operation!
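For example, if the data lives in an S3 bucket, a direct listing (sketched here with the AWS CLI and a hypothetical bucket name) only reveals this internal layout:

```shell
# Hypothetical example: listing the bucket behind a JuiceFS volume shows only
# JuiceFS's internal objects (a chunks/ prefix with numerically named
# directories and block objects), never the original file names.
aws s3 ls --recursive s3://mybucket/
```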
Getting Started
Before you begin, make sure you have:
- Redis database for metadata storage
- Object storage for storing data blocks
- JuiceFS Client downloaded and installed
Please refer to Quick Start Guide to start using JuiceFS right away!
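As a rough sketch (the Redis address, bucket URL, credentials, and mount point below are placeholders; follow the Quick Start Guide for the authoritative steps), creating and mounting a file system looks roughly like this:

```shell
# Create a file system named "myjfs": metadata in a local Redis, data in S3
juicefs format \
    --storage s3 \
    --bucket https://mybucket.s3.us-east-1.amazonaws.com \
    --access-key <ACCESS_KEY> \
    --secret-key <SECRET_KEY> \
    redis://127.0.0.1:6379/1 \
    myjfs

# Mount it in the background at /mnt/jfs
juicefs mount -d redis://127.0.0.1:6379/1 /mnt/jfs
```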
Command Reference
Check out all the command line options in command reference.
Containers
JuiceFS can be used as a persistent volume for Docker and Podman; please check here for details.
Kubernetes
It is also very easy to use JuiceFS on Kubernetes. Please find more information here.
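For instance, a Helm-based install of the CSI Driver might look like the sketch below (the chart repository URL, chart name, and namespace are assumptions; verify them against the CSI Driver documentation):

```shell
# Assumed Helm repository and chart name for the JuiceFS CSI Driver
helm repo add juicefs https://juicedata.github.io/charts/
helm repo update
helm install juicefs-csi-driver juicefs/juicefs-csi-driver -n kube-system
```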
Hadoop Java SDK
If you want to use JuiceFS in Hadoop, check the Hadoop Java SDK.
Advanced Topics
- Redis Best Practices
- How to Setup Object Storage
- Cache Management
- Fault Diagnosis and Analysis
- FUSE Mount Options
- Using JuiceFS on Windows
- S3 Gateway (see the example below)
Please refer to JuiceFS Document Center for more information.
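As a quick illustration of the S3 Gateway topic above, the gateway can be started on top of an existing file system roughly as follows (the metadata URL, listen address, and credentials are placeholders, and the exact environment variables may differ between versions, so check the S3 Gateway documentation):

```shell
# Placeholder credentials for the gateway's root access key and secret key
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678

# Serve an existing JuiceFS volume through an S3-compatible endpoint
juicefs gateway redis://127.0.0.1:6379/1 localhost:9000
```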
POSIX Compatibility
JuiceFS has passed all of the compatibility tests (8813 in total) in the latest pjdfstest.
```
All tests successful.

Test Summary Report
-------------------
/root/soft/pjdfstest/tests/chown/00.t (Wstat: 0 Tests: 1323 Failed: 0)
  TODO passed: 693, 697, 708-709, 714-715, 729, 733
Files=235, Tests=8813, 233 wallclock secs ( 2.77 usr 0.38 sys + 2.57 cusr 3.93 csys = 9.65 CPU)
Result: PASS
```
Aside from the POSIX features covered by pjdfstest, JuiceFS also provides:
- Close-to-open consistency. Once a file is written and closed, the written data is guaranteed to be visible in subsequent opens and reads. Within the same mount point, all written data can be read immediately.
- Rename and all other metadata operations are atomic, guaranteed by Redis transactions.
- Opened files remain accessible after unlink from the same mount point.
- Mmap (tested with FSx).
- Fallocate with punch hole support.
- Extended attributes (xattr).
- BSD locks (flock).
- POSIX record locks (fcntl).
Performance Benchmark
Basic benchmark
JuiceFS provides a subcommand that can run a few basic benchmarks to help you understand how it performs in your environment.
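A minimal run only needs the path of a mounted file system (assuming it is mounted at /mnt/jfs):

```shell
# Run the built-in benchmark (large-file throughput plus small-file and
# metadata operations) against a mounted JuiceFS path
juicefs bench /mnt/jfs
```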
Throughput
A sequential read/write benchmark has also been performed on JuiceFS, EFS and S3FS using fio.
The results show that JuiceFS can provide 10X more throughput than the other two (see more details).
Metadata IOPS
A simple metadata benchmark has also been performed on JuiceFS, EFS and S3FS using mdtest.
The result shows that JuiceFS can provide significantly more metadata IOPS than the other two (see more details).
Analyze performance
See Real-Time Performance Monitoring if you encounter performance issues.
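As an example (assuming a mount point of /mnt/jfs), the following commands show live metrics and profile recent file system operations:

```shell
# Show real-time performance statistics for a mounted file system
juicefs stats /mnt/jfs

# Aggregate the access log to profile recent operations on the mount point
juicefs profile /mnt/jfs
```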
Supported Object Storage
- Amazon S3
- Google Cloud Storage
- Azure Blob Storage
- Alibaba Cloud Object Storage Service (OSS)
- Tencent Cloud Object Storage (COS)
- Qiniu Cloud Object Storage (Kodo)
- QingStor Object Storage
- Ceph RGW
- MinIO
- Local disk
- Redis
- ...
JuiceFS supports almost all object storage services. Learn more.
Who is using
JuiceFS is production ready and is used on thousands of machines in production. A list of users has been assembled and documented here. In addition, JuiceFS has several collaborative projects that integrate with other open source projects, which we have documented here. If you are also using JuiceFS, please feel free to let us know, and you are welcome to share your specific experience with everyone.
The storage format is stable and will be supported by all future releases.
Roadmap
- Support FoundationDB as metadata engine
- Directory quotas
- User and group quotas
- Snapshot
- Write once read many (WORM)
Reporting Issues
We use GitHub Issues to track community reported issues. You can also contact the community for any questions.
Contributing
Thank you for your contribution! Please refer to the JuiceFS Contributing Guide for more information.
Community
Welcome to join the Discussions and the Slack channel to connect with JuiceFS team members and other users.
Usage Tracking
JuiceFS collects anonymous usage data by default to help us better understand how the community is using it. Only core metrics (e.g. version number) are reported; user data and any other sensitive information are never included. The related code can be viewed here.
You can also disable reporting with the command line option --no-usage-report:

```shell
juicefs mount --no-usage-report
```
License
JuiceFS is open-sourced under Apache License 2.0, see LICENSE.
Credits
The design of JuiceFS was inspired by Google File System, HDFS and MooseFS. Thanks for their great work!
FAQ
Why doesn't JuiceFS support XXX object storage?
JuiceFS supports many object storage services. Please check this list first. If the object storage you want to use is compatible with S3, you can treat it as S3. Otherwise, try reporting an issue.
Can I use Redis Cluster as metadata engine?
Yes. Since v1.0.0 Beta3, JuiceFS has supported Redis Cluster as the metadata engine. Note, however, that Redis Cluster requires the keys of all operations in a transaction to be in the same hash slot, so a single JuiceFS file system can only use one hash slot.
See "Redis Best Practices" for more information.
What's the difference between JuiceFS and XXX?
See "Comparison with Others" for more information.
For more FAQs, please see the full list.