# Bottler (BETA)

## Abandoned!
This project has not been used by me in production for months, and I don't expect to be able to dedicate time to it for a while. It was never meant to be more than my own tools, just in the open. So no big deal.

As always, feel free to fork it and carry on with it if you find it useful!
Bottler is a collection of tools that aims to help you generate releases, ship them to your servers, install them there, and get them live on production.
## What
Several tools that can be used separately:

- release: generate `tar.gz` files with your app and its dependencies (not including the whole `erts` by now).
- ship: ship your generated `tar.gz` via `scp` to every server you configure.
- install: properly install your shipped release on each of those servers.
- restart: fire a quick restart to apply the newly installed release if you are using Harakiri.
- green_flag: wait for the deployed application to signal it's working.
- deploy: release, ship, install, restart, and then wait for green_flag.
- rollback: quick restart on a previous release.
- observer: opens an observer window connected to the given server.
- exec: runs the given command on every server, showing their outputs.
- goto: opens an SSH session with a server in a new terminal window.
You should have public-key SSH access to all servers you intend to work with. The Erlang runtime should be installed there too. Everything else, including Elixir itself, is included in the release.

For now it's not able to deal with all the hot code swap bolts, screws and nuts. Someday it will be.
## Alternative to...
Initially it was an alternative to exrm, due to its lack of some features I love.

Recently, after creating and using bottler on several projects for some months, I discovered edeliver, and it looks great! When I have time I will read its code carefully, compare it with bottler, and maybe borrow some ideas.

Looking forward to distillery too. The plan is to use it to generate the releases.
## Use

Add to your deps like this:

```elixir
{:bottler, ">= 0.5.0"}
```

Or if you want to take a walk on the wild side:

```elixir
{:bottler, github: "rubencaro/bottler"}
```
On your config:

```elixir
config :bottler, :params, [servers: [server1: [ip: "1.1.1.1"],
                                     server2: [ip: "1.1.1.2"]],
                           remote_user: "produser",
                           rsa_pass_phrase: "passphrase",
                           cookie: "secretcookie",
                           max_processes: 262144,
                           additional_folders: ["docs"],
                           ship: [timeout: 60_000,
                                  method: :scp],
                           green_flag: [timeout: 30_000],
                           goto: [terminal: "terminator -T '<%= title %>' -e '<%= command %>'"],
                           forced_branch: "master",
                           hooks: [pre_release: %{command: "whatever", continue_on_fail: false}]]
```
- `servers` - list of servers to deploy to.
- `remote_user` - user name to log in with.
- `rsa_pass_phrase` - pass phrase for your SSH keys (we recommend not to put it here in plain text; `System.get_env("RSA_PASS_PHRASE")` would do).
- `cookie` - distributed Erlang cookie.
- `max_processes` - maximum number of processes allowed on the Erlang VM (see here). Defaults to `262144`.
- `additional_folders` - additional folders to include in the release under the `lib` folder.
- `ship` - options for the `ship` task:
  - `timeout` - timeout in milliseconds for shipment through scp; defaults to `60_000`.
  - `method` - method of shipment, one of `:scp`, `:remote_scp`, etc.
- `green_flag` - options for the `green_flag` task:
  - `timeout` - timeout in milliseconds waiting for green flags; defaults to `30_000`.
- `goto` - options for the `goto` task:
  - `terminal` - template for the actual terminal command.
- `forced_branch` - only allow executing dangerous tasks when the local git repo is on the given branch.
- `hooks` - hooks to run external commands at interesting moments.
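For instance, to avoid keeping the pass phrase in plain text, the config can read it from the environment. This is just a sketch; the `RSA_PASS_PHRASE` variable name is an assumption, not something bottler requires:

```elixir
# read the pass phrase from an environment variable at config time
config :bottler, :params,
  servers: [server1: [ip: "1.1.1.1"]],
  remote_user: "produser",
  rsa_pass_phrase: System.get_env("RSA_PASS_PHRASE"),
  cookie: "secretcookie"
```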
Then you can use the tasks like `mix bottler.release`. Take a look at the docs for each task with `mix help <task>`.

The `prod` environment is used by default. Use it like `MIX_ENV=other_env mix bottler.taskname` to force it to `other_env`.

You may also want to add `<project>/rel` and `<project>/.bottler` to your `.gitignore` if you don't want every generated file, including release `.tar.gz` files, to get into your repo.
## Release

Build a release file. Use like `mix bottler.release`.

Any script (or `EEx` template) in a `lib/scripts` folder will be included in the release package. The `install` task also links that folder directly from the current release, so you can see your scripts on production inside `$HOME/<project>/current/scripts`. The contents of the folder will be merged with bottler's own `lib/scripts` folder. Take a look at it for examples: https://github.com/rubencaro/bottler/tree/master/lib/scripts.
## Ship

Ship a release file to configured remote servers. Use like `mix bottler.ship`.

You can configure some things about it under the `ship` section:

- timeout: the timeout that applies to the upload process.
- method: one of:
  - scp: straight scp from the local machine to every target server.
  - remote_scp: upload the release only once from your local machine to the first configured server, and then scp remotely to every other target.
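For example, a setup with a slow uplink might raise the timeout and let the servers copy the release among themselves. The values here are purely illustrative:

```elixir
# give uploads two minutes, and upload from the local machine only once
ship: [timeout: 120_000,
       method: :remote_scp]
```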
## Install

Install a shipped file on configured remote servers. Use like `mix bottler.install`.
## Restart

Touch `tmp/restart` on configured remote servers. That expects Harakiri or similar software to be reacting to that file.

Use like `mix bottler.restart`.
## Alive Loop

Typically implemented on production like this:

```elixir
@doc """
Tell the world outside we are alive
"""
def alive_loop(opts \\ []) do
  # register the name if asked to
  if opts[:name], do: Process.register(self(), opts[:name])
  :timer.sleep(5_000)
  tmp_path = Application.get_env(:myapp, :tmp_path) |> Path.expand
  {_, _, version} = Application.started_applications |> Enum.find(&match?({:myapp, _, _}, &1))
  :os.cmd('echo \'#{version}\' > #{tmp_path}/alive')
  alive_loop()
end
```

And run by a `Task` on the supervision tree like this:

```elixir
worker(Task, [MyApp, :alive_loop, [[name: MyApp.AliveLoop]]])
```

It touches the `tmp/alive` file every ~5 seconds, so anyone outside the Erlang VM can tell whether the app is actually running.
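On more recent Elixir versions (1.5+), where `worker/3` is deprecated, the same thing can be sketched with a child spec. The restart override is needed because `Task` children are `:temporary` by default, and we want the loop back if it crashes:

```elixir
# a hypothetical child spec equivalent of the worker/3 call above
Supervisor.child_spec(
  {Task, fn -> MyApp.alive_loop(name: MyApp.AliveLoop) end},
  id: MyApp.AliveLoop,
  restart: :permanent
)
```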
## Watchdog script for crontab

Among the generated scripts, put by the deploy task inside `$HOME/<project>/current/scripts`, there's a `watchdog.sh` meant to be run by `cron`.

That script checks the mtime of the `tmp/alive` file to ensure that it's younger than 60 seconds. If it's not, it starts the application. If the application is running, the watchdog script will not even try to start it again.
## Green Flag Test

A task to wait until the contents of the `tmp/alive` file match the version of the `current` release, or the given timeout is reached. Use like `mix bottler.green_flag`.

If you have special needs at the start of your application, such as waiting for some cache to fill or some connections to be made, then you just have to control the actual value that is written to the `alive` file. It has to match the new version only when everything is ready to work. You can use an `Agent` like:

```elixir
@doc """
Tell the world outside we are alive
"""
def alive_loop(opts \\ []) do
  #...
  version = Agent.get(:version_holder, &(&1))
  #...
end
```
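A minimal sketch of the other side of that `Agent` (the `:version_holder` name follows the snippet above; everything else is an assumption): start it holding a placeholder, and update it to the real version only once the app is ready.

```elixir
# on startup: hold a placeholder value until the app is ready to serve
{:ok, _pid} = Agent.start_link(fn -> "warming_up" end, name: :version_holder)

# later, once caches are filled and connections are made:
new_version = Application.spec(:myapp, :vsn) |> to_string()
Agent.update(:version_holder, fn _old -> new_version end)
```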
## Deploy

Build a release file, ship it to remote servers, install it, and restart the app. Then wait for the green flag test. No hot code swap for now.

Use like `mix deploy`.
## Rollback

Simply move the `current` link to the previous release and restart to apply it. It's also possible to deploy a previous release, but this is much faster.

Be careful, because the previous release may be different on each server. It's up to you to keep all your servers rollback-able (yeah).

Use like `mix bottler.rollback`.
## Observer

Use like `mix observer server1`.

It takes the IP of the given server from configuration, then opens a double SSH tunnel to its epmd service and its application node. Then it executes an Elixir script which spawns an observer window locally, connected to the tunnelled node. You just need to select the remote node from the Nodes menu.
## Exec

Use like `mix bottler.exec 'ls -alt some/path'`.

It runs the given command through parallel SSH connections to all the configured servers. It accepts an optional `--timeout` parameter.
## Goto

Use like `mix goto server1`.

It opens an SSH session in a new terminal window on the server with the given name. The actual `terminal` command can be configured as a template.
## GCE support

Whenever you can use Google's `gcloud` from your computer (i.e. authenticate and see that it works), you can configure `bottler` to use it too, to get your instances' IP addresses. Instead of:

```elixir
servers: [server1: [ip: "1.1.1.1"],
          server2: [ip: "1.1.1.2"]]
```

You just do:

```elixir
servers: [gce_project: "project-id", match: "regexstr"]
```

When you perform an operation on a server, its IP will be obtained using the `gcloud` command. You don't need to reserve more static IP addresses for your instances.

Optionally you can give a `match` regex string to filter the server names returned by gcloud by default. It's just the same you would give to the `--servers` switch of the tasks. This filter will be added to the one given at the command-line switch, i.e. if you configure `match` and then pass `--servers`, then only servers with a name that matches both regexes will pass.
## Hooks

You can configure hooks to be run at several points of the process. To define a hook you must add it to your configuration like this:

```elixir
hooks: [hook_point_name: %{command: "whatever", continue_on_fail: false}],
```

`continue_on_fail` determines bottler's behaviour when the return code of the given command is not zero. When `continue_on_fail` is `true`, bottler will continue with the normal execution. Otherwise it will halt.

Supported hook points are:

- pre_release: executed right before the release task
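For instance, a hypothetical `pre_release` hook that aborts the whole process when the test suite fails (the command is only an example):

```elixir
# halt the release if `mix test` exits with a non-zero code
hooks: [pre_release: %{command: "mix test", continue_on_fail: false}]
```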
## TODOs

- Use distillery
- Add more testing
- Separate section documenting every configuration option
- Get it stable on production
- Complete README
- Rollback to any previous version
- Add support for deploy to AWS instances
## Changelog

### master

- Add pre-release hook
- Support for hooks
- Remove 1.4 warnings
- Configurable `max_processes`
- Log using server names
- Fix some `scp` glitches when shipping between servers
- Support for `Regex` on server names
- Green flag support
- Support for forced release branch
- Log guessed server IPs
- Options to filter target servers from the command line
- Resolve server IPs only once
- Add support for deploy to GCE instances
- Remove `helper_scripts` task
- `goto` task
- Use SSHEx 2.1.0
- Cookie support
- Configurable shipment timeout
- `erl_connect` (no Elixir needed on target)
- `observer` task
- `bottler.exec` task
- `remote_scp` shipment support
- Log erts versions on both sides
### 0.5.0

- Use new SSHEx 1.1.0

### 0.4.1

- Fix `:ssh` sometimes not started on install

### 0.4.0

- Use SSHEx
- Add helper_scripts

### 0.3.0

- Individual tasks for each step
- Add connect script
- Add fast rollback
- Few README improvements

### 0.2.0

- First package released