go-ycsb

go-ycsb is a Go port of YCSB. It fully supports all YCSB generators and the Core workload, so we can run the basic CRUD benchmarks with Go.

Why another Go YCSB?

Getting Started

Download

https://github.com/pingcap/go-ycsb/releases/latest

Linux

wget -c https://github.com/pingcap/go-ycsb/releases/latest/download/go-ycsb-linux-amd64.tar.gz -O - | tar -xz

# give it a try
./go-ycsb --help

OSX

wget -c https://github.com/pingcap/go-ycsb/releases/latest/download/go-ycsb-darwin-amd64.tar.gz -O - | tar -xz

# give it a try
./go-ycsb --help

Building from source

git clone https://github.com/pingcap/go-ycsb.git
cd go-ycsb
make

# give it a try
./bin/go-ycsb --help

Notice:

Usage

For the most part, we can start from the official YCSB document Running-a-Workload.

Shell

./bin/go-ycsb shell basic
» help
YCSB shell command

Usage:
  shell [command]

Available Commands:
  delete      Delete a record
  help        Help about any command
  insert      Insert a record
  read        Read a record
  scan        Scan starting at key
  table       Get or [set] the name of the table
  update      Update a record
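
A quick interactive session might look like the following. This is only a sketch: the exact argument syntax of each sub-command is shown by its own help (e.g. » help insert), and the table, key, and field names here are placeholders.

» table usertable
» insert key1 field0=value0
» read key1
» update key1 field0=value1
» delete key1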

Load

./bin/go-ycsb load basic -P workloads/workloada
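
The -P workload file sets the core properties; individual properties can be overridden with -p. For example, to load a smaller data set than the workload file specifies (recordcount is a standard core-workload property):

# load 100k records instead of the workload default
./bin/go-ycsb load basic -P workloads/workloada -p recordcount=100000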

Run

./bin/go-ycsb run basic -P workloads/workloada
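
The run stage accepts the same property overrides, for example to bound the number of operations and the client concurrency (operationcount and threadcount are standard core-workload properties):

# run 100k operations with 16 client threads
./bin/go-ycsb run basic -P workloads/workloada -p operationcount=100000 -p threadcount=16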

Supported Databases

Output configuration

| field | default value | description |
|---|---|---|
| measurementtype | "histogram" | The mechanism for recording measurements, one of histogram, raw or csv |
| measurement.output_file | "" | File to write output to, default writes to stdout |
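
For example, to record raw measurements and write them to a file instead of stdout (the path below is just an illustration):

# write raw measurements to a file
./bin/go-ycsb run basic -P workloads/workloada -p measurementtype=raw -p measurement.output_file=/tmp/ycsb-output.csv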

Database Configuration

You can pass database configurations directly on the command line with -p field=value.
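
For instance, assuming a MySQL server reachable at 10.0.0.1 (a placeholder address), a load that first drops existing data could look like:

# placeholder host; adjust to your deployment
./bin/go-ycsb load mysql -P workloads/workloada -p mysql.host=10.0.0.1 -p mysql.port=3306 -p mysql.user=root -p dropdata=true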

Common configurations:

| field | default value | description |
|---|---|---|
| dropdata | false | Whether to remove all data before test |
| verbose | false | Output the execution query |
| debug.pprof | ":6060" | Go debug profile address |

MySQL & TiDB

| field | default value | description |
|---|---|---|
| mysql.host | "127.0.0.1" | MySQL Host |
| mysql.port | 3306 | MySQL Port |
| mysql.user | "root" | MySQL User |
| mysql.password | | MySQL Password |
| mysql.db | "test" | MySQL Database |
| tidb.cluster_index | true | Whether to use cluster index, for TiDB only |
| tidb.instances | "" | Comma-separated address list of TiDB instances (e.g. tidb-0:4000,tidb-1:4000) |

TiKV

| field | default value | description |
|---|---|---|
| tikv.pd | "127.0.0.1:2379" | PD endpoints, separated by comma |
| tikv.type | "raw" | TiKV mode, "raw", "txn", or "coprocessor" |
| tikv.conncount | 128 | gRPC connection count |
| tikv.batchsize | 128 | Request batch size |
| tikv.async_commit | true | Enable async commit or not |
| tikv.one_pc | true | Enable one-phase commit or not |
| tikv.apiversion | "V1" | API version of the TiKV server, "V1" or "V2" |
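
As an illustration, a run against TiKV in transactional mode through a local PD endpoint might look like this (assuming the storage name tikv):

# txn mode; point tikv.pd at your PD endpoints
./bin/go-ycsb run tikv -P workloads/workloada -p tikv.pd=127.0.0.1:2379 -p tikv.type=txn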

FoundationDB

| field | default value | description |
|---|---|---|
| fdb.cluster | "" | The cluster file used for FoundationDB, if not set, will use the default |
| fdb.dbname | "DB" | The cluster database name |
| fdb.apiversion | 510 | API version, now only 5.1 is supported |

PostgreSQL & CockroachDB & AlloyDB & Yugabyte

| field | default value | description |
|---|---|---|
| pg.host | "127.0.0.1" | PostgreSQL Host |
| pg.port | 5432 | PostgreSQL Port |
| pg.user | "root" | PostgreSQL User |
| pg.password | | PostgreSQL Password |
| pg.db | "test" | PostgreSQL Database |
| pg.sslmode | "disable" | PostgreSQL SSL mode |

Aerospike

| field | default value | description |
|---|---|---|
| aerospike.host | "localhost" | The host of the Aerospike service |
| aerospike.port | 3000 | The port of the Aerospike service |
| aerospike.ns | "test" | The namespace to use |

Badger

| field | default value | description |
|---|---|---|
| badger.dir | "/tmp/badger" | The directory to save data |
| badger.valuedir | "/tmp/badger" | The directory to save values; if not set, badger.dir is used |
| badger.sync_writes | false | Sync all writes to disk |
| badger.num_versions_to_keep | 1 | How many versions to keep per key |
| badger.max_table_size | 64MB | Each table (or file) is at most this size |
| badger.level_size_multiplier | 10 | Equals SizeOf(Li+1)/SizeOf(Li) |
| badger.max_levels | 7 | Maximum number of levels of compaction |
| badger.value_threshold | 32 | If value size >= this threshold, only store value offsets in tree |
| badger.num_memtables | 5 | Maximum number of tables to keep in memory, before stalling |
| badger.num_level0_tables | 5 | Maximum number of Level 0 tables before we start compacting |
| badger.num_level0_tables_stall | 10 | If we hit this number of Level 0 tables, we will stall until L0 is compacted away |
| badger.level_one_size | 256MB | Maximum total size for L1 |
| badger.value_log_file_size | 1GB | Size of a single value log file |
| badger.value_log_max_entries | 1000000 | Max number of entries a value log file can hold (approximately); a value log file is limited by the smaller of its file size and max entries |
| badger.num_compactors | 3 | Number of compaction workers to run concurrently |
| badger.do_not_compact | false | Stops LSM tree from compactions |
| badger.table_loading_mode | options.LoadToRAM | How the LSM tree should be accessed |
| badger.value_log_loading_mode | options.MemoryMap | How the value log should be accessed |

RocksDB

| field | default value | description |
|---|---|---|
| rocksdb.dir | "/tmp/rocksdb" | The directory to save data |
| rocksdb.allow_concurrent_memtable_writes | true | Sets whether to allow concurrent memtable writes |
| rocksdb.allow_mmap_reads | false | Enable/Disable mmap reads for reading sst tables |
| rocksdb.allow_mmap_writes | false | Enable/Disable mmap writes for writing sst tables |
| rocksdb.arena_block_size | 0 (write_buffer_size / 8) | Sets the size of one block in arena memory allocation |
| rocksdb.db_write_buffer_size | 0 (disabled) | Sets the amount of data to build up in memtables across all column families before writing to disk |
| rocksdb.hard_pending_compaction_bytes_limit | 256GB | Sets the bytes threshold at which all writes are stopped if the estimated bytes needed for compaction exceed this threshold |
| rocksdb.level0_file_num_compaction_trigger | 4 | Sets the number of files to trigger level-0 compaction |
| rocksdb.level0_slowdown_writes_trigger | 20 | Sets the soft limit on the number of level-0 files |
| rocksdb.level0_stop_writes_trigger | 36 | Sets the maximum number of level-0 files; writes are stopped at this point |
| rocksdb.max_bytes_for_level_base | 256MB | Sets the maximum total data size for the base level |
| rocksdb.max_bytes_for_level_multiplier | 10 | Sets the max bytes for level multiplier |
| rocksdb.max_total_wal_size | 0 ([sum of all write_buffer_size * max_write_buffer_number] * 4) | Sets the maximum total WAL size in bytes. Once write-ahead logs exceed this size, flushing is forced for the column families whose memtables are backed by the oldest live WAL file (i.e. the ones that are causing all the space amplification) |
| rocksdb.memtable_huge_page_size | 0 | Sets the huge page size for the arena used by the memtable |
| rocksdb.num_levels | 7 | Sets the number of levels for this database |
| rocksdb.use_direct_reads | false | Enable/Disable direct I/O mode (O_DIRECT) for reads |
| rocksdb.use_fsync | false | Enable/Disable fsync |
| rocksdb.write_buffer_size | 64MB | Sets the amount of data to build up in memory (backed by an unsorted log on disk) before converting to a sorted on-disk file |
| rocksdb.max_write_buffer_number | 2 | Sets the maximum number of write buffers that are built up in memory |
| rocksdb.max_background_jobs | 2 | Sets the maximum number of concurrent background jobs (compactions and flushes) |
| rocksdb.block_size | 4KB | Sets the approximate size of user data packed per block. Note that the block size specified here corresponds to uncompressed data; the actual size of the unit read from disk may be smaller if compression is enabled |
| rocksdb.block_size_deviation | 10 | Sets the block size deviation, used to close a block before it reaches the configured block_size. If the percentage of free space in the current block is less than this number and adding a new record to the block would exceed the configured block size, the block is closed and the new record is written to the next block |
| rocksdb.cache_index_and_filter_blocks | false | Indicates whether index/filter blocks are put in the block cache. If not specified, each "table reader" object will pre-load the index/filter block during table initialization |
| rocksdb.no_block_cache | false | Specify whether the block cache should be used or not |
| rocksdb.pin_l0_filter_and_index_blocks_in_cache | false | If this and cache_index_and_filter_blocks are true, filter and index blocks are stored in the cache, but a reference is held in the "table reader" object so the blocks are pinned and only evicted from cache when the table reader is freed |
| rocksdb.whole_key_filtering | true | Specify whether whole keys (not just prefixes) should be placed in the filter. This must generally be true for gets to be efficient |
| rocksdb.block_restart_interval | 16 | Sets the number of keys between restart points for delta encoding of keys. This parameter can be changed dynamically |
| rocksdb.filter_policy | nil | Sets the filter policy to reduce disk reads. Many applications will benefit from passing the result of NewBloomFilterPolicy() here |
| rocksdb.index_type | kBinarySearch | Sets the index type used for this table. kBinarySearch: a space-efficient index block optimized for binary search. kHashSearch: the hash index, if enabled, does a hash lookup when Options.prefix_extractor is provided. kTwoLevelIndexSearch: a two-level index implementation; both levels are binary search indexes |
| rocksdb.block_align | false | Enable/Disable aligning data blocks on the lesser of page size and block size |

Spanner

| field | default value | description |
|---|---|---|
| spanner.db | "" | Spanner Database |
| spanner.credentials | "~/.spanner/credentials.json" | Google application credentials for Spanner |

Sqlite

| field | default value | description |
|---|---|---|
| sqlite.db | "/tmp/sqlite.db" | Database path |
| sqlite.mode | "rwc" | Open mode: ro, rw, rwc, memory |
| sqlite.journalmode | "DELETE" | Journal mode: DELETE, TRUNCATE, PERSIST, MEMORY, WAL, OFF |
| sqlite.cache | "Shared" | Cache: shared, private |

Cassandra

| field | default value | description |
|---|---|---|
| cassandra.cluster | "127.0.0.1:9042" | Cassandra cluster |
| cassandra.keyspace | "test" | Keyspace |
| cassandra.connections | 2 | Number of connections per host |
| cassandra.username | cassandra | Username |
| cassandra.password | cassandra | Password |

MongoDB

| field | default value | description |
|---|---|---|
| mongodb.url | "mongodb://127.0.0.1:27017" | MongoDB URI |
| mongodb.tls_skip_verify | false | Enable/disable server CA certificate verification |
| mongodb.tls_ca_file | "" | Path to the MongoDB server CA certificate file |
| mongodb.namespace | "ycsb.ycsb" | Namespace to use |
| mongodb.authdb | "admin" | Authentication database |
| mongodb.username | N/A | Username for authentication |
| mongodb.password | N/A | Password for authentication |
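
A sketch of a load against an authenticated MongoDB deployment; the URI, username, and password below are placeholders:

# placeholder credentials; adjust to your deployment
./bin/go-ycsb load mongodb -P workloads/workloada -p mongodb.url=mongodb://10.0.0.2:27017 -p mongodb.username=ycsb -p mongodb.password=secret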

Redis

| field | default value | description |
|---|---|---|
| redis.datatype | hash | "hash", "string" or "json" ("json" requires RedisJSON available) |
| redis.mode | single | "single" or "cluster" |
| redis.network | tcp | "tcp" or "unix" |
| redis.addr | | Redis server address(es) in "host:port" form; can be semicolon (;) separated in cluster mode |
| redis.username | | Redis server username |
| redis.password | | Redis server password |
| redis.db | 0 | Redis server target db |
| redis.max_redirects | 0 | The maximum number of retries before giving up (only for cluster mode) |
| redis.read_only | false | Enables read-only commands on slave nodes (only for cluster mode) |
| redis.route_by_latency | false | Allows routing read-only commands to the closest master or slave node (only for cluster mode) |
| redis.route_randomly | false | Allows routing read-only commands to a random master or slave node (only for cluster mode) |
| redis.max_retries | | Max retries before giving up the connection |
| redis.min_retry_backoff | 8ms | Minimum backoff between each retry |
| redis.max_retry_backoff | 512ms | Maximum backoff between each retry |
| redis.dial_timeout | 5s | Dial timeout for establishing new connections |
| redis.read_timeout | 3s | Timeout for socket reads |
| redis.write_timeout | 3s | Timeout for socket writes |
| redis.pool_size | 10 | Maximum number of socket connections |
| redis.min_idle_conns | 0 | Minimum number of idle connections |
| redis.max_idle_conns | 0 | Maximum number of idle connections. If <= 0, connections are not closed due to a connection's idle time |
| redis.max_conn_age | 0 | Connection age at which the client closes the connection |
| redis.pool_timeout | 4s | Amount of time the client waits for a connection if all connections are busy, before returning an error |
| redis.idle_timeout | 5m | Amount of time after which the client closes idle connections. Should be less than the server timeout |
| redis.idle_check_frequency | 1m | Frequency of idle checks made by the idle connections reaper. Deprecated in favour of redis.max_idle_conns |
| redis.tls_ca | | Path to CA file |
| redis.tls_cert | | Path to cert file |
| redis.tls_key | | Path to key file |
| redis.tls_insecure_skip_verify | false | Controls whether a client verifies the server's certificate chain and host name |
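
For example, a cluster-mode run with semicolon-separated addresses (the hosts below are placeholders):

# placeholder cluster nodes; adjust to your deployment
./bin/go-ycsb run redis -P workloads/workloada -p redis.mode=cluster -p redis.addr="10.0.0.1:6379;10.0.0.2:6379" -p redis.datatype=hash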

BoltDB

| field | default value | description |
|---|---|---|
| bolt.path | "/tmp/boltdb" | The database file path. If the file does not exist, it will be created automatically |
| bolt.timeout | 0 | The amount of time to wait to obtain a file lock. When set to zero, it will wait indefinitely. This option is only available on Darwin and Linux |
| bolt.no_grow_sync | false | Sets the DB.NoGrowSync flag before memory mapping the file |
| bolt.read_only | false | Open the database in read-only mode |
| bolt.mmap_flags | 0 | Sets the DB.MmapFlags flag before memory mapping the file |
| bolt.initial_mmap_size | 0 | The initial mmap size of the database in bytes. If <= 0, the initial map size is 0. If the size is smaller than the previous database size, it has no effect |

etcd

| field | default value | description |
|---|---|---|
| etcd.endpoints | "localhost:2379" | The etcd endpoint(s); multiple endpoints can be passed separated by comma |
| etcd.dial_timeout | "2s" | The dial timeout duration passed into the client config |
| etcd.cert_file | "" | When using secure etcd, this should point to the crt file |
| etcd.key_file | "" | When using secure etcd, this should point to the pem file |
| etcd.cacert_file | "" | When using secure etcd, this should point to the ca file |
| etcd.serializable_reads | false | Whether to use serializable reads |
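
When etcd is running with TLS, the client certificate files can be passed in directly; the endpoints and paths below are placeholders:

# placeholder endpoints and certificate paths; adjust to your cluster
./bin/go-ycsb run etcd -P workloads/workloada -p etcd.endpoints=etcd-0:2379,etcd-1:2379 -p etcd.cert_file=/etc/etcd/client.crt -p etcd.key_file=/etc/etcd/client.key -p etcd.cacert_file=/etc/etcd/ca.crt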

DynamoDB

| field | default value | description |
|---|---|---|
| dynamodb.tablename | "ycsb" | The database table name |
| dynamodb.primarykey | "_key" | The table primary key field name |
| dynamodb.rc.units | 10 | Read request units throughput |
| dynamodb.wc.units | 10 | Write request units throughput |
| dynamodb.ensure.clean.table | true | On load mode, ensure that the table is clean at the beginning. If true and the table already exists, it will be deleted and recreated |
| dynamodb.endpoint | "" | Endpoint used for the connection. If empty, the default loaded configs are used |
| dynamodb.region | "" | Region used for the connection (should match the endpoint). If empty, the default loaded configs are used |
| dynamodb.consistent.reads | false | Reads on DynamoDB are eventually consistent by default. If your benchmark/use case requires strongly consistent reads, set this option to true |
| dynamodb.delete.after.run.stage | false | Delete the database table after the run stage |
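
For example, a load against a local DynamoDB-compatible endpoint (such as DynamoDB Local; the endpoint and region below are placeholders):

# placeholder endpoint and region; adjust to your setup
./bin/go-ycsb load dynamodb -P workloads/workloada -p dynamodb.endpoint=http://127.0.0.1:8000 -p dynamodb.region=us-east-1 -p dynamodb.tablename=ycsb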

TODO