Native model

Adds interoperability on top of serialization formats such as bincode and postcard.

See the Concepts section below for more details.

Goals

Usage

       Application 1 (DotV1)        Application 2 (DotV1 and DotV2)
                |                                  |
   Encode DotV1 |--------------------------------> | Decode DotV1 to DotV2
                |                                  | Modify DotV2
   Decode DotV1 | <--------------------------------| Encode DotV2 back to DotV1
                |                                  |
// Application 1
let dot = DotV1(1, 2);
let bytes = native_model::encode(&dot).unwrap();

// Application 1 sends bytes to Application 2.

// Application 2
// The bytes can be decoded directly into the newer type DotV2 (an upgrade).
let (mut dot, source_version) = native_model::decode::<DotV2>(bytes).unwrap();
assert_eq!(dot, DotV2 { 
    name: "".to_string(), 
    x: 1, 
    y: 2 
});
dot.name = "Dot".to_string();
dot.x = 5;
// For interoperability, we encode the data with the version compatible with Application 1 (downgrade).
let bytes = native_model::encode_downgrade(dot, source_version).unwrap();

// Application 2 sends bytes to Application 1.

// Application 1
let (dot, _) = native_model::decode::<DotV1>(bytes).unwrap();
assert_eq!(dot, DotV1(5, 2));

Serialization format

You can enable one of the built-in serialization formats via feature flags, for example:

[dependencies]
native_model = { version = "0.1", features = ["bincode_2_rc"] }

Each feature flag corresponds to a specific minor version of the serialization format. To avoid breaking changes, the default serialization format is the oldest one.
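
The data model examples below also derive serde's Serialize and Deserialize, so a typical manifest pulls in serde with its derive feature alongside native_model (a sketch; version numbers are indicative):

[dependencies]
serde = { version = "1.0", features = ["derive"] }
native_model = { version = "0.1", features = ["bincode_2_rc"] }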

Custom serialization format

Define a struct with any name you like. This struct must implement the native_model::Encode and native_model::Decode traits.

For complete examples, see the custom serialization examples and the default codec implementations in the repository.
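
As a rough illustration, and assuming the traits follow the same shape as the built-in codecs (a generic Encode/Decode pair over the serialized type with an associated Error type; check the default implementations for the exact signatures), a JSON codec based on serde_json might look like this:

use serde::{de::DeserializeOwned, Serialize};

// Hypothetical codec: the trait shapes below are an assumption modelled on the
// built-in codecs and may differ from the real native_model traits.
struct MyJson;

impl<T: Serialize> native_model::Encode<T> for MyJson {
    type Error = serde_json::Error;
    fn encode(obj: &T) -> Result<Vec<u8>, Self::Error> {
        serde_json::to_vec(obj)
    }
}

impl<T: DeserializeOwned> native_model::Decode<T> for MyJson {
    type Error = serde_json::Error;
    fn decode(data: Vec<u8>) -> Result<T, Self::Error> {
        serde_json::from_slice(&data)
    }
}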

Data model

Define your model using the native_model macro.

Attributes:

- id: the unique identifier of the model.
- version: the version of the model.
- from (optional): the previous version of the model, convertible in both directions with From.
- try_from (optional): the previous version of the model together with an error type, convertible in both directions with TryFrom.

use native_model::native_model;
use serde::{Deserialize, Serialize};

#[derive(Deserialize, Serialize, PartialEq, Debug)]
#[native_model(id = 1, version = 1)]
struct DotV1(u32, u32);

#[derive(Deserialize, Serialize, PartialEq, Debug)]
#[native_model(id = 1, version = 2, from = DotV1)]
struct DotV2 {
    name: String,
    x: u64,
    y: u64,
}

// Implement the conversion between versions From<DotV1> for DotV2 and From<DotV2> for DotV1.

#[derive(Deserialize, Serialize, PartialEq, Debug)]
#[native_model(id = 1, version = 3, try_from = (DotV2, anyhow::Error))]
struct DotV3 {
    name: String,
    cord: Cord,
}

#[derive(Deserialize, Serialize, PartialEq, Debug)]
struct Cord {
    x: u64,
    y: u64,
}

// Implement the conversions between versions: TryFrom<DotV2> for DotV3 and TryFrom<DotV3> for DotV2 (see the sketch below).
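
The conversions requested by the two comments above could be implemented as follows (a minimal sketch: the field mappings are chosen to match the Usage example, and the TryFrom error type is the anyhow::Error declared in the try_from attribute):

impl From<DotV1> for DotV2 {
    fn from(dot: DotV1) -> Self {
        // Upgrade: the name did not exist in V1, so it defaults to empty.
        DotV2 {
            name: "".to_string(),
            x: dot.0 as u64,
            y: dot.1 as u64,
        }
    }
}

impl From<DotV2> for DotV1 {
    fn from(dot: DotV2) -> Self {
        // Downgrade: the name is dropped and the coordinates are narrowed to u32.
        DotV1(dot.x as u32, dot.y as u32)
    }
}

impl TryFrom<DotV2> for DotV3 {
    type Error = anyhow::Error;
    fn try_from(dot: DotV2) -> Result<Self, Self::Error> {
        // This upgrade cannot fail, but a fallible conversion may return Err here.
        Ok(DotV3 {
            name: dot.name,
            cord: Cord { x: dot.x, y: dot.y },
        })
    }
}

impl TryFrom<DotV3> for DotV2 {
    type Error = anyhow::Error;
    fn try_from(dot: DotV3) -> Result<Self, Self::Error> {
        Ok(DotV2 {
            name: dot.name,
            x: dot.cord.x,
            y: dot.cord.y,
        })
    }
}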

Status

Early development. Not ready for production.

Concepts

To understand how native_model works, you need to understand the following concepts.

Under the hood, native_model is a thin wrapper around your serialized data. The id and the version are each encoded as a little_endian::U32, adding 8 bytes of header at the beginning of the data.

+------------------+------------------+------------------------------------+
|     ID (4 bytes) | Version (4 bytes)| Data (indeterminate-length bytes)  |
+------------------+------------------+------------------------------------+
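
To make this layout concrete, the header can be read by hand with only the standard library (a minimal sketch; read_header is a hypothetical helper, not part of the native_model API):

// Hypothetical helper: reads the 8-byte header to recover the model id and
// version from wrapped data; the remaining bytes are the serialized payload.
fn read_header(bytes: &[u8]) -> Option<(u32, u32)> {
    if bytes.len() < 8 {
        return None;
    }
    let id = u32::from_le_bytes([bytes[0], bytes[1], bytes[2], bytes[3]]);
    let version = u32::from_le_bytes([bytes[4], bytes[5], bytes[6], bytes[7]]);
    Some((id, version))
}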

A full example is available in the repository.

Performance

Native model is designed to add minimal, constant overhead: wrapping and unwrapping cost the same regardless of the size of the data. Under the hood, the zerocopy crate is used to avoid unnecessary copies.

👉 To get the total encode/decode time, add the time taken by your serialization format.
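
For example, a rough end-to-end measurement around the calls from the Usage section could look like this (a minimal sketch using std::time; the reported times include both the native_model wrapper and the underlying serialization format):

use std::time::Instant;

let dot = DotV1(1, 2);

let start = Instant::now();
let bytes = native_model::encode(&dot).unwrap();
println!("total encode time: {:?}", start.elapsed());

let start = Instant::now();
let (_dot, _version) = native_model::decode::<DotV1>(bytes).unwrap();
println!("total decode time: {:?}", start.elapsed());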

Summary:

data size | encode time (ns)  | decode time (ps)
1 B       | 19.769 - 20.154   | 40.526 - 40.617
1 KiB     | 19.597 - 19.971   | 40.534 - 40.633
1 MiB     | 19.662 - 19.910   | 40.508 - 40.632
10 MiB    | 19.591 - 19.980   | 40.504 - 40.605
100 MiB   | 19.669 - 19.867   | 40.520 - 40.644

The benchmark of the native_model overhead is available in the repository.