Introduction
This documentation is aimed at developers who want more information about the underlying concepts and how to build and run BDeploy locally.
For people who want to use BDeploy, please refer to the end user documentation provided on our homepage: https://bdeploy.io
BHive
The BHive (short: hive) provides a mechanism to store and transfer files over the network. It decouples storage of file content and hierarchical filesystem layout descriptions (trees). The mechanism is similar to how GIT stores files. File content is stored and identified by using the content’s checksum.
Here are some properties of BHive:

- The BHive uses a Merkle hash tree to store data, much like GIT does (similar technology is also used to implement blockchains).
- Consistency of objects can be verified by calculating the checksum of the content and comparing it with the name of the file storing that content.
- BHive knows four different types of objects: `BLOB` (any file content), `TREE` (a list of `BLOB`, `TREE` and `MREF` entries), `MREF` (a reference to a to-be-nested `MANIFEST`) and `MANIFEST` (associates some meta-data with a root `TREE`).
- All known object types (except `MANIFEST`) are stored using the exact same mechanism (`ObjectDatabase`). This means verifying the consistency of whole trees is as simple as verifying the consistency of individual files.
- `MANIFEST` objects are stored in a separate `ManifestDatabase`. This allows named storage of and reference to each `MANIFEST`, since they are the top-level entry point into the BHive. Most high-level operations require `Manifest.Key` objects as parameters.
- BHive has command line tooling which allows direct manipulation of local and remote hives: `io.bdeploy.bhive.cli.BHiveCli`.
- BHive has a standalone server (using the above CLI) which allows serving BHives without BDeploy.
- An `io.bdeploy.bhive.BHive` does not expose an API directly; use the `Hive.Operation` implementations defined in the `io.bdeploy.bhive.op` package to programmatically interact with a hive.
- `Hive.Operation` can be implemented externally to provide new functionality operating on a BHive from the outside.
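The core idea — content addressed by its checksum, with trees being ordinary objects themselves — can be sketched as follows. This is an illustrative model only: the `ObjectStore` class, the SHA-1 hashing and the serialization format are assumptions for this sketch, not the real `io.bdeploy.bhive` implementation.

```typescript
import { createHash } from 'crypto';

type ObjectId = string;

// Minimal content-addressable store: an object's name IS its checksum.
class ObjectStore {
  private objects = new Map<ObjectId, Buffer>();

  // Store any content; its checksum becomes its id. Identical content
  // is automatically de-duplicated, since it hashes to the same id.
  put(content: Buffer): ObjectId {
    const id = createHash('sha1').update(content).digest('hex');
    this.objects.set(id, content);
    return id;
  }

  get(id: ObjectId): Buffer | undefined {
    return this.objects.get(id);
  }

  // FSCK principle: an object is consistent iff re-hashing its content
  // yields the name it is stored under.
  verify(id: ObjectId): boolean {
    const content = this.objects.get(id);
    if (!content) return false;
    return createHash('sha1').update(content).digest('hex') === id;
  }

  // A TREE is itself just an object: a serialized list of named entries
  // pointing at BLOBs or other TREEs (this is the Merkle tree property -
  // a tree's checksum covers everything beneath it).
  putTree(entries: { name: string; id: ObjectId }[]): ObjectId {
    const serialized = entries
      .map((e) => `${e.name}=${e.id}`)
      .sort()
      .join('\n');
    return this.put(Buffer.from(serialized));
  }
}
```

Because the tree object contains the checksums of its children, verifying the root checksum transitively vouches for every object below it.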
BHive Tour
This chapter, along with the launch configurations available in the bhive project, gives a short tour of most of the functionality that bhive provides.

All launch configurations operate on a BHive located in the containing workspace (`${workspace_loc}/runtime-hive`), in case you want to have a look at it.
Import
Import is the act of digesting a folder recursively into a BHive storage. There are two important parameters: the source folder and the destination BHive. Since BHive automatically creates an empty hive at the destination, the target BHive directory is allowed to be non-existent. The source folder is not required to follow any rules at this low-level stage; BHive will simply digest any file found recursively. While doing so, it separates each file's content from its name. They are stored in separate locations (the content as a 'blob', the name in a 'tree' as a 'pointer' to the 'blob' holding this file's content).
Use the Hive-Import launch configuration to import a directory into the BHive. The launch will prompt for two things:

- The folder to import. You can choose any folder; the target hive is created automatically if it does not exist.
- The name (manifest key) of the imported tree. This name can be used to reference the imported tree later on. Note that a name:tag combination must be unique; the import will fail if the manifest key is already in use.
Note that all BHive objects are immutable, meaning that there are never changes to any existing file. This is analogous to GIT. Manifests are not BHive objects but somewhat special (they just reference a root tree BHive object). They are mutable only in that labels can be added to and removed from the manifest.
List
You can list the content of the BHive after importing by running the Hive-Manifest-List launch configuration. It will simply list all available manifests in the BHive.
Export
The inverse operation of an Import is (surprise) an Export, which means restoring a file/folder tree exactly as it was imported. The export will scan a manifest for 'tree' objects and write all 'blob' objects to the locations described there.
Run the Hive-Export launch configuration and specify a target folder (which may not yet exist). Next specify a manifest to export, use the key you specified during Import.
Check the directory: you will notice that all files have been written back to disk, and the folder content is equal to the one imported.
Remote Serve
BHive provides a small embedded server which allows serving any number of local hives through HTTPS.
Run the Hive-Remote-Serve launch configuration to run the server, serving the BHive created with the Hive-Import application.
Note: the sample launch configurations use the test-only pre-built certificates from the jersey project.
Remote List
The same list operation as before (Hive-Manifest-List) can be performed on the remote server as well, as long as Hive-Remote-Serve is still running.
Run Hive-Remote-List to try it - this will go through the HTTPS stack and perform the manifest listing remotely.
Remote Fetch & Push
The two tools fetch and push allow fetching and pushing manifests, along with all required objects, from and to a remote hive.
Both operations are practically identical, just reversed. To demonstrate fetch, run Hive-Remote-Fetch and specify the manifest key you imported before. This will create a new (empty) hive (on first run) next to the one created by import (${workspace_loc}/runtime-hive-fetched
) and fetch this manifest into that hive. Make sure that Hive-Remote-Serve is still running for this to work.
If you are interested, you can try importing another folder which shares some content with the first one you imported. Due to the separation of content and location, each common file is stored only once, regardless of its location (name) in the tree. This also enables fetch/push to transfer only missing objects.
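The "transfer only missing objects" idea boils down to a set difference over object IDs. A minimal sketch (a hypothetical helper, not the actual fetch/push code):

```typescript
// Given the set of object ids required by the manifests being transferred,
// and the set of ids the destination hive already has, only the difference
// needs to go over the wire.
function objectsToTransfer(required: Set<string>, present: Set<string>): Set<string> {
  const missing = new Set<string>();
  required.forEach((id) => {
    if (!present.has(id)) missing.add(id);
  });
  return missing;
}
```

Because object ids are content checksums, "already present" reliably means "already has identical content" - there is no need to compare file contents across the network.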
FSCK
The FSCK (short for 'filesystem check') checks the consistency of all objects and manifests in a BHive. Run Hive-FSCK to execute a FSCK on the sample BHive.
Delete
Manifests can be deleted from a BHive as well. Run Hive-Manifest-Delete to delete a manifest. Be sure to give the same manifest key as with Hive-Import.
Note that this operation only deletes the manifest; it does not automatically remove the now-unreferenced objects from the object database. That is done by Prune.
Prune
Pruning is a cleanup operation which removes unreferenced objects from the storage. It is comparable to `git gc`.
Run Hive-Prune after running Hive-Manifest-Delete to see the effects of pruning remaining objects. If you had only one manifest and deleted that, the result should be that the objects directory in the hive contains no files anymore (only empty directories).
TreeSnapshot & Co.
The `ScanOperation` allows fetching a `TreeSnapshot` of a `MANIFEST` root tree. This allows recursively retrieving all available/relevant information about a `MANIFEST`. This includes a listing of `TREE`, `BLOB` and `MREF` entries, missing/damaged objects, etc.

The `TreeDiff` allows comparing two `TreeSnapshot` objects. It produces a `TreeElementDiff` for each element which differs between the two snapshots. This diff is based on the type and checksum of the corresponding path entries in the snapshots. There is no actual content diff, but it is easy to build one based on the available information.
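A `TreeElementDiff`-style comparison can be sketched as follows. The types and names here are illustrative stand-ins for the real bhive classes, comparing only type and checksum per path, exactly as described above:

```typescript
type EntryType = 'BLOB' | 'TREE' | 'MREF';
interface Entry { type: EntryType; checksum: string; }
type Snapshot = Map<string, Entry>;       // path -> entry
interface ElementDiff { path: string; kind: 'added' | 'removed' | 'changed'; }

function diffSnapshots(a: Snapshot, b: Snapshot): ElementDiff[] {
  const diffs: ElementDiff[] = [];
  a.forEach((entryA, path) => {
    const entryB = b.get(path);
    if (!entryB) {
      diffs.push({ path, kind: 'removed' });
    } else if (entryB.type !== entryA.type || entryB.checksum !== entryA.checksum) {
      // Only type and checksum are compared - no content diff.
      diffs.push({ path, kind: 'changed' });
    }
  });
  b.forEach((_, path) => {
    if (!a.has(path)) diffs.push({ path, kind: 'added' });
  });
  return diffs;
}
```

A content diff could then be built on top by loading the two blobs behind a 'changed' entry and diffing them, which is exactly the "easy to build" step the text mentions.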
PCU (Process Control Unit)
The PCU consumes a pre-rendered configuration (meaning all parameter value variables are resolved, all paths are absolute, etc.) to control and run processes locally.

The PCU does not make any assumptions about locations, etc. It relies completely on external sources to prepare a `ProcessGroupConfiguration` and the according `ProcessConfiguration` per configured application.
DCU (Deployment Control Unit)
The DCU's `InstanceNodeController` is responsible for consuming an `InstanceNodeManifest`. An `InstanceNodeManifest` is an artificial BHive manifest created by a configuration application, typically the web UI.

The DCU assumes that any referenced BHive `MANIFEST` is locally available, i.e. that it has been pushed to the BHive associated with the DCU before a reference to it is passed to the DCU.

Note that the DCU does not know about multiple nodes in a system. It assumes that everything is local, and so do all data structures in the DCU. The minion controlling the local DCU may have additional higher-level structures to allow multiple DCUs distributed among nodes to deploy for a single `InstanceManifest`.
Backends
Right now, there is only a single DCU, which has the node's physical disk as manifestation target.
The architecture is designed to later allow pluggable DCUs, which assume responsibility for a certain node type. This makes it possible to implement specific DCUs later and mix and match them on a node level. For instance, a special "Kubernetes" node could allow dropping applications into a cluster, having the cluster side by side with classic local application installation (but sharing configuration data, …).
Common
This bundle contains things shared by all components (BHive, DCU, PCU, Minion, …).
SecurityHelper
The `io.bdeploy.common.security` package contains the `SecurityHelper`, which can be used to generate and verify keystores and access tokens for remote APIs (over HTTP).
Configuration
The `io.bdeploy.common.cfg` package contains the `Configuration` class, which can be used to create command line tools and map their parameters to annotation proxies.
ActivityReporter
The `ActivityReporter` implementations (`Stream` and `Null`) can be used to track activities/operations.
Metrics
The `io.bdeploy.common.metrics` package contains an entry point that allows measuring various metrics.
Troubleshooting
- The JUnit 5 `@RegisterExtension` annotation allows registering instances of extensions (as opposed to `@ExtendWith`, which registers a class and manages its lifecycle as appropriate). This means that the instance does not change, which in turn means that the instance fields of the extension keep their state throughout test methods. This is not dramatic, just something you need to be aware of. It is the reason why `TestMinion` calls `resetRegistrations()` in its `beforeEach` method. Not doing so led to duplicate registrations of singleton services, where the server picked up the services for another test later on…
Minion (master/node) Server
The minion is the main BDeploy deliverable. It contains all the command line tools as well as the server for remote services and the configuration web UI.
Use `bdeploy init …` to initialize a minion root, then use `bdeploy start …` to launch a master (including the web UI) or a headless node (controlled by a master) from that root, depending on the mode given during `init`.
Client Launcher
The client launcher picks up .bdeploy files passed on the command line and handles application download/synchronization with the origin server, as well as installation in a cache directory and launching of the installed application(s).
Interfaces
Model Class Naming
The classes in this project follow a strict naming scheme.
| Class Name Suffix | Meaning |
|---|---|
| `Descriptor` | Things which are provided by the 'outside' world to describe artifacts processed by us. |
| `Configuration` | Holds data generated by us, based on *Descriptors, user input, or even thin air. |
| `Manifest` | Wraps and manages storage of *Configurations in the underlying BHives. Also handles enumeration/scanning for according objects in the BHive(s). |
| `Dto` | (not actually in the interfaces project, but UI specific). Holds/groups data which is required in this form for user display/interaction in the UI. |
| `Resource` | Interfaces for remote (JAX-RS) resources. |
| `Provider` | Providers for variable expansion in configurations on the target system. |
Variable Expansion
This project contains the logic for resolving variables when deploying to a minion. The following schemes are supported.
Expansion of variables happens for launcher paths and parameter values. Additionally, all configuration files are post-processed on the target to perform the exact same expansion.
| Provider | Variable Pattern | Supported Values | Example | Description |
|---|---|---|---|---|
| | | | | Expand the value of another parameter inline into the value of this parameter. |
| | | | | Expand one of the special directories (see |
| | | | | Expand values related to the instance containing the parameter's process |
| | | | | Expand the absolute installation path to another manifest's path on the target system. |
| | | | | Insert the given |
| | | | | Expands to target minion properties - currently only |
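The provider-based expansion can be sketched as follows. The concrete `{{SCOPE:NAME}}` pattern syntax and the provider/scope names used here are assumptions made for this illustration:

```typescript
// Each provider resolves names for one scope prefix; unknown variables
// are left untouched so other resolvers (or a later pass) can handle them.
type Resolver = (name: string) => string | undefined;

function expand(value: string, providers: Map<string, Resolver>): string {
  return value.replace(
    /\{\{([A-Z]+):([^}]+)\}\}/g,
    (match: string, scope: string, name: string) => {
      const resolved = providers.get(scope)?.(name);
      return resolved !== undefined ? resolved : match;
    }
  );
}
```

Running the same `expand` step over parameter values, launcher paths and configuration file contents is what gives the "exact same expansion" everywhere that the text describes.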
Manifests
Almost all of BDeploy's storage is encapsulated in Manifests in a BHive. This holds true not only for actual application/product data, but for any configuration data as well. These have a few advantages over traditional storage:
- Contents are immutable
- Contents are verifiable (through checksums)
- Contents are automatically versioned
- Content history is available
There are a bunch of Manifests for several purposes.
| Manifest Name | Purpose |
|---|---|
| | Stores information (name, description, logo) related to an Instance Group. There is exactly one |
| | Stores information (name, description) related to a Software Repository. There is exactly one |
| | The |
| | Keeps track of the Applications and their configuration per configured Node in the Instance. |
| | Holds information about a product and keeps track of all the |
| | Holds information about a single application, includes the |
| | Holds information about available nodes on a master. |
| | Manages special per-user Manifests which hold information about each user. |
MetaManifests
In addition to traditional Manifests there are MetaManifests. These allow attaching certain information to other Manifests. This makes it possible to update the attached MetaManifest independently of the immutable Manifest, whilst keeping all benefits of a Manifest for the MetaManifest as well (versioning, history, …).
| MetaManifest Name | Purpose |
|---|---|
| | Keeps track of Instance state related information (installed Instance versions, activated Instance version, etc.). |
| | Keeps track of the history of an Instance version, storing timestamps at which certain actions happened (creation, installation, activation, etc.). |
| | Keeps track of attached managed masters on a central master. |
| | Keeps track of the responsible controlling master for a single Instance. |
UI Front- and Backend
Contains the Angular configuration frontend as well as the matching backend.
Theming
The `app-theme.scss` currently supports a dark and a light theme. Components can contribute their own themes using SCSS mixins. This has the advantage that the component theme can access theme colors without knowing their exact definition. Whether the theme is dark or light (generally speaking) can be queried, so defining new (own) colors can be made dependent on that.
To create a theme mixin for any component:
- Create a new .scss file in the component's folder, e.g. `my-component.scss`.
- Import the Angular Material theming support.
- Define a mixin with your theme selectors (attention: those are application global, so be careful when naming them).
- Import and include the new mixin in the global `app-theme.scss`.
A very simple custom component’s scss file could look like this:
```scss
@import '~@angular/material/theming';

@mixin my-component-theme($theme) {
  .my-component-class {
    background-color: mat-color(map-get($theme, warn));
  }
}
```
This will create a CSS class using the current theme's 'warn' color as background.
Now you need to import and include the theme in the global `app-theme.scss` file:
```scss
@import '~@angular/material/theming';

@include mat-core();

/* Custom component theme collection */
@import 'app/my-component/my-component-theme.scss';
@import ... (other component themes)

@mixin all-themes($theme) {
  // general angular material theme
  @include angular-material-theme($theme);

  // application themes
  @include my-component-theme($theme);
  @include ... (other component themes)
}

... (rest of file)
```
Accessing theme properties in a mixin
Angular Material defines a few theme properties. Those can be accessed from within a mixin easily: a SCSS theme is basically just a map, so `map-get` can be used to access the actual values.
```scss
@import '~@angular/material/theming';

@mixin my-component-theme($theme) {
  $dark: map-get($theme, is-dark);
  $color-light: rgb(230, 255, 230);
  $color-dark: rgb(42, 56, 42);

  .my-component-class {
    background-color: if($dark, $color-dark, $color-light);
  }
}
```
As you can see, the `is-dark` property of the theme is queried here. If the value is true, `$color-dark` is used, otherwise `$color-light` is used.
Available attributes are:

- `primary` - the primary palette, can be used with `mat-color`.
- `accent` - the secondary palette, can be used with `mat-color`.
- `warn` - the warn palette, can be used with `mat-color`.
- `is-dark` - boolean determining whether the theme is dark or light.
- `foreground` - complex map containing various foreground colors.
- `background` - complex map containing various background colors.
Palettes
Every theme has a `primary`, `accent` and `warn` palette. You can access those colors using `mat-color`.
```scss
@mixin my-component-theme($theme) {
  $accent-palette: map-get($theme, accent);
  $bg-color-lighter: mat-color($accent-palette, lighter);
  $bg-color-normal: mat-color($accent-palette);
  $bg-color-darker: mat-color($accent-palette, darker);

  .my-component-class {
    background-color: $bg-color-lighter;
  }
}
```
`mat-color` accepts a palette as its first argument; this can be any of the theme palettes. An optional second argument can be `lighter`, `darker`, or a dedicated index into the palette to choose a certain color intensity explicitly.
Foreground Colors
The complex foreground map has the following named color definitions:
- `base`
- `divider`
- `dividers`
- `disabled`
- `disabled-button`
- `disabled-text`
- `elevation`
- `hint-text`
- `secondary-text`
- `icon`
- `icons`
- `text`
- `slider-min`
- `slider-off`
- `slider-off-active`
Each of those can be accessed by `map-get`-ting them:
```scss
@mixin my-component-theme($theme) {
  $background-colors: map-get($theme, background);

  .my-component-class {
    background-color: map-get($background-colors, app-bar);
  }
}
```
This example will use the `app-bar` map key in the complex `background` map entry of the current theme.
Background Colors
Analogous to the foreground colors, these background colors are defined:

- `status-bar`
- `app-bar`
- `background`
- `hover`
- `card`
- `dialog`
- `disabled-button`
- `raised-button`
- `focused-button`
- `selected-button`
- `selected-disabled-button`
- `disabled-button-toggle`
- `unselected-chip`
- `disabled-list-option`
Using these background colors works exactly the same way as using the foreground colors.
Development Setup
BDeploy uses the Eclipse IDE for backend and VSCode for frontend development. You will need:

- The Eclipse IDE
  - You need to have the Launch Configuration DSL (LcDsl) extension installed.
- VSCode
  - You need to have the Angular Essentials extension installed.
- OpenJDK 11
- NodeJS and NPM
  - Some people prefer to use NVM
Environment Setup
- You will want to set the `JAVA_HOME` environment variable to point to the path where you extracted the OpenJDK 11 package, to assure that the build picks up the correct Java.
- Make sure that the `npm` command works from the command line where you will run `gradle` builds (`cmd.exe` on Windows, `bash` on Linux).
- Install the Angular CLI globally for easier working by running `npm install -g @angular/cli`.
Note: You need root permissions on Linux if you installed NodeJS (and NPM) through your distribution, to be able to globally install the Angular CLI.
Repository and Gradle Build
This documentation will assume the path `/work/bdeploy` to be available; if not, substitute it with any other path that works for you.
- Clone the repository: `cd /work/bdeploy && git clone https://github.com/bdeployteam/bdeploy.git`
- Change to the repository directory: `cd bdeploy`
- Start the `gradle` build:
  - `./gradlew build` on Linux (`bash`)
  - `.\gradlew.bat build` on Windows (`cmd.exe`)
  - The build should conclude with a `BUILD SUCCESSFUL` message
The `gradle` build will build, test and package BDeploy. You can find the build artifacts here afterwards:

- `./launcher/build/distributions` - Distributable packages of the launcher application, which is used to start client applications.
- `./minion/build/distributions` - Distributable packages of the main BDeploy binaries. They contain the start command (which runs BDeploy master and node servers) as well as all CLI commands available to BDeploy administrators. The distribution also contains all supported launcher packages as nested ZIP files.
- `./interfaces/build/libs/bdeploy-api-*-all.jar` - The distributable API bundle including all external dependencies. This can be used to create additional integrations for BDeploy.
Additionally, documentation deliverables can be found in the `./doc/build/docs/dev` and `./doc/build/docs/user` directories (developer and end-user documentation respectively).
Eclipse Workspace
To be able to build, start and debug BDeploy backend applications from the Eclipse IDE, you need to perform some extra setup steps:
- On the command line (see Repository and Gradle Build), generate the Eclipse IDE project files by running `./gradlew eclipse` (`gradlew eclipse` on Windows).
- Start the Eclipse IDE and choose a workspace, e.g. `/work/bdeploy/workspace`.
- Open the Git Repositories view.
- Click Add existing local repository, browse to the repository location (e.g. `/work/bdeploy/bdeploy`) and add it.
- Right-click the repository in the Git Repositories view and select Import Projects….
- Select all projects of type Eclipse project.
  - Don't select the projects in the eclipse-tea folder.
  - Don't select projects of type Maven.
- This results in a complete BDeploy workspace setup which compiles automatically.
Running BDeploy from Eclipse
BDeploy uses LcDsl launch configurations to run binaries. You can find launch configurations in the Launch Configurations view.
- Find and run (hint: right-click) the Master-Init launch configuration.
  - This will initialize a BDeploy master root inside the chosen workspace directory, e.g. `/work/bdeploy/workspace/runtime/master`.
  - A default user will be created: username = admin, password = admin.
- Finally, find and run the Master launch configuration to spin up a BDeploy master.
Note: The BDeploy master will host the Web UI also when started from the Eclipse IDE, but it will not work due to a slightly different setup. You must use VSCode to host a matching Web UI for the backend run from Eclipse.
Running BDeploy’s Web UI from VSCode
To spin up a matching frontend for the master started in Running BDeploy from Eclipse, you need to start the Angular application from within VSCode.

- On the command line, navigate to the `./ui/webapp` directory in the repository (e.g. `/work/bdeploy/bdeploy/ui/webapp`) and run VSCode in the current directory using `code .`
- Open a terminal in VSCode and run `ng serve` to start the Angular development server. This will take a while to compile the Web UI.
  - The terminal can be opened using
  - The application will be started at http://localhost:4200 by default.
Note: BDeploy's backend is HTTPS-only and uses a self-signed certificate by default. You will need to accept the certificate in the browser before any communication (especially from within the Web UI) can happen. This makes it necessary to open a second tab in the browser and navigate to https://localhost:7701 to accept the security exception before the Web UI can communicate with the backend properly. Note that this URL will also load the Web UI once the security exception is in place, but it will fail to start (see the note above).
Building for other platforms
You can build distribution packages for other platforms by installing their respective JDKs. You need to specify those JDKs as properties during the build. To simplify the process, you can create these entries in `~/.gradle/gradle.properties`:

```properties
systemProp.win64jdk=/path/to/jdks/windows/jdk-11.0.8+10
systemProp.linux64jdk=/path/to/jdks/linux/jdk-11.0.8+10
#systemProp.mac64jdk=/path/to/jdks/mac/jdk-11.0.8+10/Contents/Home
```

Note: Of course you need to download those JDKs and adapt the paths to your environment.
Plugins
BDeploy has plugin support, for now mainly to contribute custom parameter editors, for example to contribute an encryption algorithm to a password input field, etc.
From a low level viewpoint, plugins (as of now) may:
- Contribute JAX-RS endpoints on the server side. These endpoints are provided in a dedicated JAX-RS application on the server, so basically all kinds of Providers, Features, Filters and Resources can be registered on the plugin's behalf.
- Contribute static assets. Those are served from a dedicated per-plugin location and can host all kinds of static resources, including the required ES6 JavaScript modules used to provide custom editor UI.
- Define custom editors and provide the matching ES6 module through the static resources mentioned above.
1. The server will first load any global plugins that are installed. When loading a plugin, it is internally registered and a dedicated namespace is assigned where its resources are served. The server registers the plugin's JAX-RS endpoints and all static assets provided by the plugin under this context path. They are then directly accessible from the outside.
2. The Web UI will query for available custom editors once a parameter specifies it requires one (see app-info.yaml → customEditor in the end user documentation).
3. The Web UI's plugin service will query the backend for plugins which can provide the requested editor(s). This will also demand-load plugins which are provided by the product backing the instance currently being edited. This works the same as for global plugins above.
4. Assuming the backend returned a plugin providing a suitable custom editor, the custom editor metadata includes the path to the JavaScript (ES6) module to load. The module is loaded into the Web application.
5. The editor is then instantiated and passed an API object which can be used to transparently access the plugin's exposed resources (JAX-RS API, static assets).
6. The custom editor is bound to the Web UI on demand and can now perform edit operations on parameters, using its backend resources as required.
Plugin Distribution
Plugins are distributed and loaded in two ways:
- A global plugin is a JAR file which is added to the server either through the command line or its UI. Global plugins provide functionality to be used by everything on the server globally, on demand.
- A product-bound (local) plugin is delivered through the product by configuring it in the `product-info.yaml`. See the user documentation for details.
How to create a plugin
A template to begin from is provided in the GitHub repository. It contains all the bits required:

- A plugin project must compile against the BDeploy API JAR. This can be found either in the GitHub Packages Maven repository, or on the releases page as a plain JAR.
- A plugin must extend the `io.bdeploy.api.plugin.v1.Plugin` class. It may override methods to provide its resources to BDeploy.
- A plugin must specify some important MANIFEST.MF headers which help the BDeploy plugin system identify plugins:
  - `BDeploy-Plugin`: This header should contain the fully qualified name of the class extending the `io.bdeploy.api.plugin.v1.Plugin` class.
  - `BDeploy-PluginName`: A human readable name for the plugin, helping administrators identify the plugin.
  - `BDeploy-PluginVersion`: A version string, helping administrators identify the plugin's version.
- A plugin can contain JAX-RS resources for the backend.
- A plugin can contain ES6 JavaScript modules which provide custom editor UI for the user. In theory, you can use whatever you like, as long as it has a single JavaScript entry point file which can be loaded from the UI (e.g. Stencil WebComponents have been tested - you still need to provide the single entry point class though).
  - The ES6 module must contain a default exported class, so BDeploy can instantiate the plugin without knowing its class names.
The JavaScript Plugin API
The Java API is pretty straightforward and can be seen immediately when looking at the `io.bdeploy.api.plugin.v1.Plugin` class. It is a little bit different for the JavaScript API, as there are no interfaces you can look at. Internally, BDeploy uses TypeScript and thus has an interface definition.
The Custom Editor API
The interface for a custom editor looks like this:

```typescript
export interface EditorPlugin {
  new(api: Api); // (1)
  bind(onRead: () => string, onUpdate: (value: string) => void, onValidStateChange: (valid: boolean) => void): HTMLElement; // (2)
}
```

A custom editor definition in the plugin's main class points to an ES6 module, whose default class must implement this interface.

1. The constructor accepts an API object, which can be used to interface with the JAX-RS resources of the plugin.
2. The bind method receives callbacks for communication with the configuration web UI, and returns an HTMLElement which has to be created by the plugin.
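A hedged sketch of what an implementation might look like. The uppercase "editing" rule is invented for illustration, and element creation is abstracted behind a tiny factory so the sketch stays runnable outside a browser; a real plugin would create an actual `HTMLElement` via the DOM, as the interface requires:

```typescript
// Minimal stand-in for the DOM element a real editor would create.
type EditorElement = { value: string; oninput?: () => void };

class UppercaseEditor {
  // The api object would be used to call the plugin's JAX-RS resources.
  constructor(private api: unknown) {}

  bind(
    onRead: () => string,
    onUpdate: (value: string) => void,
    onValidStateChange: (valid: boolean) => void,
    // Factory stand-in for document.createElement('input') in a browser.
    createElement: () => EditorElement = () => ({ value: '' })
  ): EditorElement {
    const input = createElement();
    input.value = onRead(); // show the current parameter value
    input.oninput = () => {
      const v = input.value.toUpperCase(); // the invented "editing" rule
      onValidStateChange(v.length > 0);    // empty values are invalid
      onUpdate(v);                         // write the value back to the web UI
    };
    return input;
  }
}
```

The important part is the callback wiring: the editor never talks to BDeploy's data model directly, it only reads via `onRead` and reports changes via `onUpdate`/`onValidStateChange`.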
The API Object
The API object is passed to the plugin's constructor. It allows communication with the plugin's backend without the plugin needing to know where exactly this API is hosted on the server.

```typescript
export interface Api {
  get(path: string, params?: {[key: string]: string}): Promise<any>; // (1)
  put(path: string, body: any, params?: {[key: string]: string}): Promise<any>;
  post(path: string, body: any, params?: {[key: string]: string}): Promise<any>;
  delete(path: string, params?: {[key: string]: string}): Promise<any>;
  getResourceUrl(): string; // (2)
}
```

1. The `get`, `put`, `post` and `delete` methods can be used to issue according requests to the plugin's JAX-RS resources.
2. The resource URL can be used to load static resources. The URL is the base URL where static and JAX-RS resources are registered.
Release Procedure
Preparation
Make sure to run a manual test and check at least the following before releasing:

- Set up a fresh server.
- Create an instance with a client application.
- Download the installer for the client and check that it works as intended.

The rest is covered pretty well by automated tests already.
Branches, Tags & GitHub Release
Note: The steps outlined in this chapter are usually encapsulated in a single automated release job (Jenkins, …).
Prerequisites:

- Know the version you want to release (i.e. `RELEASE_VERSION`).
- Know the "next" version, which will be the one set as active version after the release (i.e. `NEXT_VERSION`).
- A GitHub account (i.e. `GH_USER`) and a token (i.e. `GH_TOKEN`) with the permission to create and update a release on the BDeploy GitHub repository.
- A Sonatype account (i.e. `SONATYPE_USER`) and a token (i.e. `SONATYPE_TOKEN`) with the permission to upload and release artifacts to `oss.sonatype.org` targeting Maven Central.
- A GPG key which is registered with Sonatype and can be used to sign the application JAR files for upload to Maven. You need the key file (i.e. `GPG_FILE`), the ID of the key (i.e. `GPG_ID`) and the password to the key file (i.e. `GPG_PASS`).
- A clone of the repository - since right now an internal repository is used as well as the GitHub repository, you need a clone of the internal repository.
- An empty directory where JDKs can be downloaded to. Set its path in the environment variable `JDK_DL_ROOT`.
Steps:

- Set the environment variables with the according data prepared in the prerequisites: `GH_USER`, `GH_TOKEN`, `SONATYPE_USER`, `SONATYPE_TOKEN`, `GPG_FILE`, `GPG_ID` and `GPG_PASS`. Also set `RELEASE_VERSION` and `NEXT_VERSION`, or pass the values directly to the command.
- Execute `./release.sh ${RELEASE_VERSION} ${NEXT_VERSION}` in the repository. Make sure that the repository has the current master branch checked out and that it is ready to be released.
This will:

- Download the most current version of the JDK to be used.
- Set the release version in the source repository.
- Run the build with all tests except the binary release tests.
- Update documentation screenshots from the UI tests.
- Publish artifacts to the `oss.sonatype.org` server.
  - Note: a separate release step has to be performed manually later, see below.
- Commit changed files (version, screenshots, test-data) locally (i.e. 'Release ${RELEASE_VERSION}').
- Push the commit to the GitHub repository's master branch.
- Publish the built artifacts to a newly created GitHub release.
- Set the next version in the source repository.
- Add the just-published version to the versions to be tested in binary release tests.
- Run the build using the new version without any tests except the binary release test - which will now verify that updating to the "next" version is possible from the just-released release.
- Commit changed files (version, test-data) locally (i.e. 'Update to ${NEXT_VERSION}').
`release.sh` will push the release commit to GitHub, but will not push any commit to the origin of the repository. If the internal repository is used, you need to push there manually. Also, the update to `NEXT_VERSION` is not pushed at all and needs to be pushed separately.
Before pushing to the internal repository, make sure to execute `git fetch --all --tags`, since GitHub will automatically create the release tag in the GitHub repository.
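Assuming the internal repository is configured as the `origin` remote of your clone (an assumption; adjust the remote name to your setup), the manual push could look like this sketch:

```shell
# Sketch: first sync the release tag that GitHub created automatically,
# then push the outstanding commits (the release commit and the
# NEXT_VERSION update) to the internal repository.
push_after_release() {
  git fetch --all --tags
  git push origin master
}
```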
Release notes
While `release.sh` is running, you can gather release notes by looking at the individual commits that happened since the last release. This is most easily done by looking at the output of:

git log --no-merges v3.5.0..

assuming that `3.5.0` was the last release. Scroll to the bottom of the output and work your way upwards, commit after commit. Not every commit needs mentioning in the release notes, but quite often nearly every commit ends up in the release notes in one way or another.
Maven Central Release
After running `release.sh` and pushing all the commits to their respective targets, `oss.sonatype.org` will contain a new staging repository for the Maven artifacts published by the build.
You need to:

- Log in to see anything useful.
- Navigate to Staging Repositories.
- Select the `iobdeploy-XXXX` repository, where XXXX is any number.
- In the lower part of the screen, go to Content and check whether the content of the repository looks complete and OK.
- Select the repository in the upper part of the screen and click Close.
- Wait a few minutes and refresh the view using the Refresh button.
- Once enabled, click the Release button while having the repository selected. You can leave "Drop automatically" checked; this way nothing has to be done after clicking OK.

This will release the new version to Maven Central. This can take a few minutes, up to half an hour. Also, the Maven Central index can take up to 24 hours to refresh - this index is what is used to display data on the Maven Central homepage. Thus it may be that you cannot find the new version on the homepage, but can already download it using Maven/Gradle.
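Because the search index lags behind, the most reliable way to check availability is to probe the repository directly (`https://repo1.maven.org/maven2` is the Maven Central repository root). The helper below is a sketch; the base URL parameter exists mainly so it can be tested against other repositories, and any coordinates you pass are up to you.

```shell
# Sketch: probe whether a given artifact version is already downloadable,
# independent of the (slower) Maven Central search index, by requesting
# its POM file. curl -f makes HTTP errors produce a non-zero exit status.
artifact_available() {
  local gpath="$1" artifact="$2" version="$3"
  local base="${4:-https://repo1.maven.org/maven2}"
  curl -sf -o /dev/null \
    "${base}/${gpath}/${artifact}/${version}/${artifact}-${version}.pom"
}
```

A zero exit status means Maven/Gradle can already resolve the version, even if the homepage does not list it yet.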
Documentation Update
After the release has been made, we need to update the documentation on the official homepage.
Prerequisites:

- The BDeploy main repository, having the `Release X.X.X` release commit/tag checked out (!).
- A clone of the official BDeploy homepage repository.
In the BDeploy source repository, change into the `doc` directory and run:

../gradlew build

This will create the documentation artifacts in the `build/docs/` subdirectory: `dev` and `user` for the developer and user documentation respectively.
Change into each directory, open `index.html` and verify that the correct release version number is shown on the documentation index.
Change to the BDeploy homepage repository and delete the `dev` and `user` directories completely.
Copy the `dev` and `user` directories from the BDeploy source repository's `doc/build/docs/` directory to the homepage repository. Commit the change and push it to the origin repository. The rest is done automatically by GitHub.
Build Tool Integration Plugin Update
The BDeploy source repository also hosts various build tool integrations as well as test projects for some features (plugins, build tools). After the release they need to be updated as well.
Since they need the Maven artifacts published earlier, you need to make sure that those are already available from Maven Central.

- `plugins/build-tool-gradle` - the Gradle integration.
- `plugins/gradle-plugin-test-project` - a test project for the Gradle integration.
- `plugins/bdeploy-demo-plugin` - a simple demo BDeploy plugin using the public API.

Last but not least, there is also `plugins/build-tool-tea` - the Eclipse TEA integration. This needs to be updated separately in an Eclipse TEA enabled workspace.
For all the others, updating is done using `gradle-upgrade-interactive`. You need to have it installed globally using npm:
npm install -g gradle-upgrade-interactive
Once this is available, `cd` into each of the directories and run `gradle-upgrade-interactive`. You will be presented with a list of things to update in the given plugin. Select all of them and confirm.
In `gradle-plugin-test-project` you will not see a BDeploy API jar update, as this project uses BDeploy only indirectly through the `build-tool-gradle` project - this is OK.
Now build each of the projects, and confirm that everything is OK with the new BDeploy release. Finally commit the changes, and you’re done.
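The per-project routine can be sketched as a loop; `run_in_each` is a hypothetical helper (not part of the repository), and the directory list mirrors the projects above.

```shell
# Sketch: run a command in each plugin project in turn, stopping on the
# first failure. The subshell keeps the caller's working directory intact.
run_in_each() {
  local cmd="$1"; shift
  local d
  for d in "$@"; do
    (cd "$d" && "$cmd") || return 1
  done
}

# Usage (from the repository root, after installing the tool via npm):
# run_in_each gradle-upgrade-interactive \
#   plugins/build-tool-gradle \
#   plugins/gradle-plugin-test-project \
#   plugins/bdeploy-demo-plugin
```

Note that `gradle-upgrade-interactive` itself is interactive, so each project still requires confirming the selected updates by hand.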
Publish Gradle Plugin
You will need a Gradle account and the permission to publish in the `io.bdeploy` namespace.
Make sure to set up `gradle.properties` in your home directory according to the instructions in the Gradle documentation (i.e. set the `gradle.publish.key` and `gradle.publish.secret` properties).
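A minimal `~/.gradle/gradle.properties` would then contain the following (the values shown are placeholders; use the key and secret from your Gradle plugin portal account):

```properties
# ~/.gradle/gradle.properties - values are placeholders
gradle.publish.key=<your-api-key>
gradle.publish.secret=<your-api-secret>
```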
Execute `./gradlew publishPlugins` in the `plugins/build-tool-gradle` folder.
Make sure to check on the Gradle plugin portal whether the new version was published.