Elastic
Elastic is an Elasticsearch client for the Go programming language.
See the wiki for additional information about Elastic.
Releases
Notice that the master branch always refers to the latest version of Elastic. If you want to use stable versions of Elastic, you should use the packages released via gopkg.in.
Here's the version matrix:
Elasticsearch version | Elastic version | Package URL
--- | --- | ---
2.x | 3.0 | gopkg.in/olivere/elastic.v3 (source doc)
1.x | 2.0 | gopkg.in/olivere/elastic.v2 (source doc)
0.9-1.3 | 1.0 | gopkg.in/olivere/elastic.v1 (source doc)
Example:
You have Elasticsearch 1.7.3 installed and want to use Elastic. As listed above, you should use Elastic 2.0. So you first install Elastic 2.0.
$ go get gopkg.in/olivere/elastic.v2
Then you use it via the following import path:
import "gopkg.in/olivere/elastic.v2"
Elastic 3.0
Elastic 3.0 targets Elasticsearch 2.0 and later. Elasticsearch 2.0.0 was released on 28th October 2015.
Notice that there are a lot of breaking changes in Elasticsearch 2.0 and we used this as an opportunity to clean up and refactor Elastic as well.
Elastic 2.0
Elastic 2.0 targets Elasticsearch 1.x and is published via gopkg.in/olivere/elastic.v2.
Elastic 1.0
Elastic 1.0 is deprecated. You should really update Elasticsearch and Elastic to a recent version.
However, if you cannot update for some reason, don't worry. Version 1.0 is still available. All you need to do is go-get it and change your import path as described above.
Status
We have been using Elastic in production since 2012. Although Elastic is quite stable in our experience, we don't have a stable API yet. The reason for this is that Elasticsearch changes quite often and at a fast pace. At the moment we focus on features, not on a stable API.
Having said that, there have been no big API changes that would have required you to rewrite large parts of your application. More often than not, changes are limited to renaming APIs and adding or removing features so that we stay in sync with the Elasticsearch API.
Elastic has been used in production with the following Elasticsearch versions: 0.90, 1.0-1.7. Furthermore, we use Travis CI to test Elastic with the most recent versions of Elasticsearch and Go. See the .travis.yml file for the exact matrix and Travis for the results.
Elasticsearch has quite a few features. A lot of them are not yet implemented in Elastic (see below for details). I add features and APIs as required. It's straightforward to implement missing pieces. I'm accepting pull requests :-)
Having said that, I hope you find the project useful.
Usage
The first thing you do is create a Client. The client connects to Elasticsearch on http://127.0.0.1:9200 by default.
You typically create one client for your app. Here's a complete example.
// Create a client
client, err := elastic.NewClient()
if err != nil {
    // Handle error
}

// Create an index
_, err = client.CreateIndex("twitter").Do()
if err != nil {
    // Handle error
    panic(err)
}

// Add a document to the index
tweet := Tweet{User: "olivere", Message: "Take Five"}
_, err = client.Index().
    Index("twitter").
    Type("tweet").
    Id("1").
    BodyJson(tweet).
    Do()
if err != nil {
    // Handle error
    panic(err)
}

// Search with a term query
termQuery := elastic.NewTermQuery("user", "olivere")
searchResult, err := client.Search().
    Index("twitter").   // search in index "twitter"
    Query(&termQuery).  // specify the query
    Sort("user", true). // sort by "user" field, ascending
    From(0).Size(10).   // take documents 0-9
    Pretty(true).       // pretty print request and response JSON
    Do()                // execute
if err != nil {
    // Handle error
    panic(err)
}

// searchResult is of type SearchResult and returns hits, suggestions,
// and all kinds of other information from Elasticsearch.
fmt.Printf("Query took %d milliseconds\n", searchResult.TookInMillis)

// Each is a convenience function that iterates over hits in a search result.
// It makes sure you don't need to check for nil values in the response.
// However, it ignores errors in serialization. If you want full control
// over iterating the hits, see below.
var ttyp Tweet
for _, item := range searchResult.Each(reflect.TypeOf(ttyp)) {
    if t, ok := item.(Tweet); ok {
        fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
    }
}
// TotalHits is another convenience function that works even when something goes wrong.
fmt.Printf("Found a total of %d tweets\n", searchResult.TotalHits())

// Here's how you iterate through results with full control over each step.
if searchResult.Hits != nil {
    fmt.Printf("Found a total of %d tweets\n", searchResult.Hits.TotalHits)

    // Iterate through results
    for _, hit := range searchResult.Hits.Hits {
        // hit.Index contains the name of the index

        // Deserialize hit.Source into a Tweet (could also be just a map[string]interface{}).
        var t Tweet
        err := json.Unmarshal(*hit.Source, &t)
        if err != nil {
            // Deserialization failed
        }

        // Work with tweet
        fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
    }
} else {
    // No hits
    fmt.Print("Found no tweets\n")
}

// Delete the index again
_, err = client.DeleteIndex("twitter").Do()
if err != nil {
    // Handle error
    panic(err)
}
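The example above relies on the default connection settings. If your cluster is not reachable on http://127.0.0.1:9200, or you want to disable sniffing of cluster nodes, you can pass options to NewClient. A minimal sketch; the URLs are placeholders:

```go
// Connect to specific nodes and disable sniffing (the URLs are examples only).
client, err := elastic.NewClient(
    elastic.SetURL("http://es1.example.com:9200", "http://es2.example.com:9200"),
    elastic.SetSniff(false),
)
if err != nil {
    // Handle error
    panic(err)
}
```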
See the wiki for more details.
API Status
Here's the current API status.
APIs
- Search (most queries, filters, facets, aggregations etc. are implemented: see below)
- Index
- Get
- Delete
- Delete By Query
- Update
- Multi Get
- Bulk (see the sketch after this list)
- Bulk UDP
- Term vectors
- Multi term vectors
- Count
- Validate
- Explain
- Search
- Search shards
- Search template
- Facets (most are implemented, see below)
- Aggregates (most are implemented, see below)
- Multi Search
- Percolate
- More like this
- Benchmark
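To give an impression of how these services are used, here is a rough sketch of the Bulk API that indexes one document and deletes another in a single request. The index, type, IDs, and the Tweet struct are taken from the usage example above:

```go
// Index one tweet and delete another one in a single bulk request.
index1 := elastic.NewBulkIndexRequest().
    Index("twitter").Type("tweet").Id("1").
    Doc(Tweet{User: "olivere", Message: "Take Five"})
delete1 := elastic.NewBulkDeleteRequest().
    Index("twitter").Type("tweet").Id("2")

bulkResponse, err := client.Bulk().
    Add(index1).
    Add(delete1).
    Do()
if err != nil {
    // Handle error
    panic(err)
}
// Errors is true if at least one of the bulk items failed.
fmt.Printf("bulk had errors: %v\n", bulkResponse.Errors)
```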
Indices
- Create index
- Delete index
- Get index
- Indices exists
- Open/close index
- Put mapping
- Get mapping
- Get field mapping
- Types exist
- Delete mapping
- Index aliases
- Update indices settings
- Get settings
- Analyze
- Index templates
- Warmers
- Status
- Indices stats
- Indices segments
- Indices recovery
- Clear cache
- Flush
- Refresh
- Optimize
- Upgrade
Snapshot and Restore
- Snapshot
- Restore
- Snapshot status
- Monitoring snapshot/restore progress
- Partial restore
Cat APIs
Not implemented. These are better suited for operating Elasticsearch from the command line.
Cluster
- Health
- State
- Stats
- Pending cluster tasks
- Cluster reroute
- Cluster update settings
- Nodes stats
- Nodes info
- Nodes hot_threads
- Nodes shutdown
Search
- Inner hits (for ES >= 1.5.0; see docs)
Query DSL
Queries
- match
- multi_match
- bool
- boosting
- common_terms
- constant_score
- dis_max
- filtered
- fuzzy_like_this_query (flt)
- fuzzy_like_this_field_query (flt_field)
- function_score
- fuzzy
- geo_shape
- has_child
- has_parent
- ids
- indices
- match_all
- mlt
- mlt_field
- nested
- prefix
- query_string
- simple_query_string
- range
- regexp
- span_first
- span_multi_term
- span_near
- span_not
- span_or
- span_term
- term
- terms
- top_children
- wildcard
- minimum_should_match
- multi_term_query_rewrite
- template_query
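Queries are built with the same fluent style as the term query in the usage example above. As a sketch, here is how a bool query could combine a match query with a term query (the field values are arbitrary):

```go
// Find tweets by user "olivere" that mention "golang".
boolQuery := elastic.NewBoolQuery().
    Must(elastic.NewMatchQuery("message", "golang")).
    Must(elastic.NewTermQuery("user", "olivere"))

searchResult, err := client.Search().
    Index("twitter").
    Query(&boolQuery).
    Do()
if err != nil {
    // Handle error
    panic(err)
}
fmt.Printf("Found a total of %d tweets\n", searchResult.TotalHits())
```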
Filters
- and
- bool
- exists
- geo_bounding_box
- geo_distance
- geo_distance_range
- geo_polygon
- geoshape
- geohash
- has_child
- has_parent
- ids
- indices
- limit
- match_all
- missing
- nested
- not
- or
- prefix
- query
- range
- regexp
- script
- term
- terms
- type
Facets
- Terms
- Range
- Histogram
- Date Histogram
- Filter
- Query
- Statistical
- Terms Stats
- Geo Distance
Aggregations
- min
- max
- sum
- avg
- stats
- extended stats
- value count
- percentiles
- percentile ranks
- cardinality
- geo bounds
- top hits
- scripted metric
- global
- filter
- filters
- missing
- nested
- reverse nested
- children
- terms
- significant terms
- range
- date range
- ipv4 range
- histogram
- date histogram
- geo distance
- geohash grid
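As a sketch of how aggregations are attached to a search, here is a terms aggregation that counts tweets per user in the index from the usage example; the aggregation name "users" is arbitrary:

```go
// Count tweets per user with a terms aggregation.
agg := elastic.NewTermsAggregation().Field("user")

searchResult, err := client.Search().
    Index("twitter").
    Aggregation("users", agg). // register the aggregation under the name "users"
    Size(0).                   // we only need the aggregation, not the hits
    Do()
if err != nil {
    // Handle error
    panic(err)
}

// Read the aggregation back from the search result.
if users, found := searchResult.Aggregations.Terms("users"); found {
    for _, bucket := range users.Buckets {
        fmt.Printf("user %v has %d tweets\n", bucket.Key, bucket.DocCount)
    }
}
```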
Sorting
- Sort by score
- Sort by field
- Sort by geo distance
- Sort by script
Scan
Scrolling through documents (e.g. search_type=scan
) are implemented via
the Scroll
and Scan
services. The ClearScroll
API is implemented as well.
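A rough sketch of scanning through all documents of the twitter index with the Scan service; the batch size is arbitrary and error handling is kept simple. Check the Scroll and Scan documentation for the exact semantics:

```go
// Scan through all tweets in batches of 100 documents.
cursor, err := client.Scan("twitter").Size(100).Do()
if err != nil {
    // Handle error
    panic(err)
}
for {
    res, err := cursor.Next()
    if err == elastic.EOS {
        break // reached the end of the stream
    }
    if err != nil {
        // Handle error
        panic(err)
    }
    if res.Hits == nil {
        continue
    }
    for _, hit := range res.Hits.Hits {
        var t Tweet
        if err := json.Unmarshal(*hit.Source, &t); err == nil {
            fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
        }
    }
}
```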
How to contribute
Read the contribution guidelines.
Credits
Thanks a lot to the great folks working hard on Elasticsearch and Go.
LICENSE
MIT License. See the LICENSE file provided in the repository for details.