udoprog.github.io

Building services with reproto

APIs are ubiquitous.

The most popular form by far is JSON-based HTTP APIs (although GraphQL is giving them a run for their money). Sometimes these are referred to as restful, because we collectively have an aversion towards taking REST seriously.

This post isn’t about REST. It’s about a project I’ve been working on for the last year to handle the lifecycle of JSON-based APIs:

Rethinking Protocols - reproto.

reproto is a number of things, but most importantly it’s an interface description language (IDL) in which you can write specifications that describe the structure of JSON objects. This IDL aims to be compact and descriptive.

A simple .reproto specification looks like this:

# File: src/cats.reproto

type Cat {
  name: string;
}

This describes an object which has a single field, name, like: {"name": "Charlie"}.

Using reproto, we can now generate bindings for this in various languages.

$ reproto build --lang rust --package cats --path src --out src/generated

For Rust, this would be using Serde:

// File: src/generated/cats.rs

#[derive(Serialize, Deserialize, Debug)]
struct Cat {
  name: String,
}

In Java, Jackson would be used:

// File: src/main/java/Cat.java

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.Data;

@Data
public class Cat {
  private final String name;

  @JsonCreator
  public Cat(@JsonProperty("name") final String name) {
    this.name = name;
  }
}
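
As a rough sketch (the ObjectMapper usage below is mine, not something reproto generates), the generated class can then be used with Jackson like any other value type:

import com.fasterxml.jackson.databind.ObjectMapper;

public class Example {
    public static void main(final String[] args) throws Exception {
        final ObjectMapper mapper = new ObjectMapper();

        // {"name": "Charlie"} deserializes into the generated Cat type.
        final Cat cat = mapper.readValue("{\"name\": \"Charlie\"}", Cat.class);

        // Lombok's @Data provides the getter.
        System.out.println(cat.getName());
    }
}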

reproto tries to integrate with the target language using the best frameworks available [1].

Dependencies

A system is something greater than the sum of its parts.

Say you want to write a service that communicates with many other services; it’s typically painful and error-prone to copy specifications around by yourself.

To solve this, reproto is not only a language specification, but also a package manager.

Provide reproto with a build manifest in reproto.toml like this:

language = "rust"
output = "src/generated"

[modules.chrono]

[packages]
"io.reproto.toystore" = "^1"

Run:

$ reproto update
$ reproto build

And reproto will have downloaded and built io.reproto.toystore from the central repository.

Importing a package from inside another specification will automatically use the repository:

use io.reproto.toystore "^1" as toystore;

type Shelf {
  toys: [toystore::Toy];
}

Dealing with many different versions of a package is handled through clever namespacing.

This makes it possible to import and use multiple different versions of a specification at once:

use io.reproto.toystore "^1" as toystore1;
use io.reproto.toystore "^2" as toystore2;

type Shelf {
  toys: [toystore1::Toy];
  toys_v2: [toystore2::Toy];
}

Documentation

Good documentation is key to effectively using an API.

reproto comes with a built-in documentation tool, reproto doc, which will generate documentation for you by reading Rust-style documentation comments.

You can check out the example documentation for io.reproto.toystore here.

Fearless versioning

With package management comes the problems associated with breaking changes.

reproto insists on using semantic versioning, and will actively check that any version you try to publish doesn’t violate it:

$ reproto publish
src/io/reproto/toystore.reproto:12:3-22:
 12:   category: Category;
       ^^^^^^^^^^^^^^^^^^^ - minor change violation: field changed to be required
io.reproto.toystore-1.0.0:12:3-23:
 12:   category?: Category;
       ^^^^^^^^^^^^^^^^^^^^ - from here

This is all based on a module named semck that operates at the AST level.

Not everything is covered yet, but it’s rapidly getting there.

Finally

In contrast to something that is purely an API specification language, reproto aims to be a complete system that holds your hand during the entire lifecycle of service development.

My litmus test will be when I’ve produced a mostly generated client for Heroic, which is well on its way.

It’s also written in Rust, a language from which a lot of these ideas have been shamelessly stolen.

There is still a lot of work to be done! If you are interested in the problem domain and have spare cycles, please join me on Gitter.

Comments on reddit.

  [1] The exact approach is configurable through modules documented under Language Support.

Rust applications under Wine

While writing my last post I had the need to compile and run some code under Windows.

Being a Linux fanboy, I found this situation less than optimal. Enter Wine.

Wine is a fantastic system. With an initial release 24 years ago, it’s grown to encompass incredible things like a full implementation of DirectX 9, providing very compelling gaming performance for Windows-only games on Linux.

It also behaves like Windows when you run Rust-based applications on it.

This post is a quick tip on how you can set up a flexible environment on Linux for compiling and testing small Rust applications that behave like they would on Windows.

Installation

Install Wine using whatever your preferred method is.

Under Fedora, you would use DNF:

$> sudo dnf install wine

Download the installer for Rust from https://www.rust-lang.org/en-US/other-installers.html

For example, version 1.21.0 (stable at the time):

$> wget https://static.rust-lang.org/dist/rust-1.21.0-i686-pc-windows-gnu.msi

Note: Make sure that you download the installer for i686 using a GNU toolchain.

Now, run the installer with wine:

$> wine msiexec /i rust-1.21.0-i686-pc-windows-gnu.msi

After the installer is done, you can create the following helper script in /usr/local/bin/rust-wine:

#!/usr/bin/env bash
set -e
base=$1
shift
[[ -z $base ]] && echo "Usage: $0 <command> [args]" && exit 100
exec wine $HOME/.wine/drive_c/Program\ Files\ \(x86\)/Rust\ stable\ GNU\ 1.21/bin/${base}.exe "$@"

You might want to modify the path to the Rust installation to suit your needs.

Let’s create a simple Hello World and take it for a spin:

$> cat > test.rs <<ENDL
fn main() {
  println!("dir: {:?}", ::std::env::current_dir().unwrap());
}
ENDL
$> rust-wine rustc test.rs
$> wine test.exe
Hello World

Enjoy!

Portability concerns with Path

I’ve been spending most of my spare time working on ReProto, and I’m at a point where I need to support specifying a per-project build manifest.

In this manifest I want to give the user the ability to specify build paths. The problem I faced is: How do you have a path specification that is portable?

The build manifest will be checked into git repositories. It will be shared verbatim across platforms, and users would expect it to work without having to convert any paths specified in it to their native representation. This is very similar to how a build configuration is provided to cargo through Cargo.toml. It would really suck if you’d have to convert all back-slashes to forward-slashes, just because the original author of a library is working on Windows.

Rust has excellent serialization support in the form of serde. The following is an example of how you can use serde to deserialize TOML whose structure is determined by a struct.

extern crate toml;
#[macro_use]
extern crate serde_derive;

use std::path::PathBuf;

#[derive(Debug, Deserialize)]
pub struct Manifest {
    paths: Vec<PathBuf>,
}

const FILE: &'static str = "paths = ['extra', 'src/main/reproto']";

pub fn main() {
    let manifest: Manifest = toml::from_str(FILE).unwrap();
    println!("{:?}", manifest);
}

We’ve deserialized a list of paths, so our work seems mostly done.

In the next section I will describe some details around platform-specific behaviors in Rust, and how they come back to bite us in this case.

Platform behaviors

Representing filesystem paths in a platform-neutral way is an interesting problem.

Rust has defined a platform-agnostic Path type which has system-specific behaviors implemented in libstd. For example, on Windows it deals with a prefix consisting of the drive letter (e.g. c:).

The effect for our manifest is that using PathBuf would permit our application to accept and operate over paths specified in different ways; exactly which ways depends on which platform your application is built for.

This is no good for configuration files that you’d expect people to share across platforms. One representation might be valid on one platform, but not on others.

The following snippet exemplifies the problem:

extern crate toml;
#[macro_use]
extern crate serde_derive;

use std::path::{PathBuf, Path};

#[derive(Debug, Deserialize)]
pub struct Manifest {
    paths: Vec<PathBuf>,
}

const FILE: &'static str = "paths = ['foo\\bar']";

pub fn main() {
    let manifest: Manifest = toml::from_str(FILE).unwrap();

    if let Some(path) = manifest.paths.iter().next() {
        let p = Path::new(".").join(path).join("baz");

        println!("path = {:?}", p);
        println!("components = {:?}", p.components().collect::<Vec<_>>());
    }
}

On Windows, it would give this output:

path = "./foo\\bar/baz"
components = [CurDir, Normal("foo"), Normal("bar"), Normal("baz")]

While on Linux, it would behave differently with:

path = "./foo\\bar/baz"
components = [CurDir, Normal("foo\\bar"), Normal("baz")]

foo\\bar is treated like a path component, because backslash (\) is not a directory separator on Linux. The implementation of Path on Linux reflects this.

This means that mutator functions in Rust will treat it as a single component when determining things like what the parent directory of a given path is:

use std::path::Path;

pub fn main() {
    let path = Path::new("root").join("foo\\bar");
    let parent = path.parent();
    println!("parent = {:?}", parent);
}

On Windows:

parent = Some("root\foo")

On Linux:

parent = Some("root")

Portable paths

Path by itself provides a portable API. PathBuf::push and Path::join are ways to manipulate a path on a per-component basis. The components themselves might have restrictions on which character sets may be used, but at least the path separator can be abstracted away.

Another major difference is how filesystem roots are designated. Windows, interestingly enough, has multiple roots - one for each drive. Linux only has one: /.

With this in mind we can write portable code that only manipulates relative paths. This works independently of which platform it is running on:

use std::path::Path;
use std::env;

fn main() {
    let base = env::current_dir().unwrap();
    let target = base.join("foo").join("bar");
    println!("target = {:?}", target);
}

On Windows this gives:

target = "C:\\Users\\udoprog\\foo\\bar" 

And on Linux:

target = "/home/udoprog/foo/bar"

Notice that the relative foo/bar traversal is maintained.

The realization I had is that you can have a portable description if you can describe a path only in terms of its components, without filesystem roots.

Neither c:\foo\bar\baz nor /foo/bar/baz is a portable description; foo/bar/baz is. It simply states: please traverse foo, then bar, then baz, relative to some directory.

Combining this relative path with a native path allows it to be translated into a platform-specific path. This path can then be used for filesystem operations.

This is the premise behind a new crate I created named relative-path, which I will be covering briefly next.

Relative paths and the real world

In the relative-path crate I’ve introduced two types: RelativePath and RelativePathBuf. These are analogous to the libstd types Path and PathBuf. A fairly significant chunk of code could be reimplemented based on these types.

The differences from their libstd siblings are small, but significant:

  • The path separator is set to a fixed character (/), regardless of platform.
  • Relative paths cannot represent an absolute path in the filesystem, without first specifying what they are relative to through to_path.

The second rule is important: it is left to the caller to determine the actual relativeness of a Path, or which filesystem root or drive it belongs to.

This permits using RelativePathBuf in cases where having a portable representation would otherwise cause problems across platforms, like with build manifests checked into a git repository:

extern crate toml;
#[macro_use]
extern crate serde_derive;
extern crate relative_path;

use relative_path::RelativePathBuf;
use std::path::Path;

#[derive(Debug, Deserialize)]
pub struct Manifest {
    paths: Vec<RelativePathBuf>,
}

const FILE: &'static str = "paths = ['foo/bar']";

pub fn main() {
    let manifest: Manifest = toml::from_str(FILE).unwrap();

    if let Some(path) = manifest.paths.iter().next() {
        let p = path.to_path(Path::new(".")).join("baz");

        println!("path = {:?}", p);
        println!("components = {:?}", p.components().collect::<Vec<_>>());
    }
}

My hope is that from now on folks won’t be relegated to storing stringly typed fields and forced to figure out the portability puzzle for themselves.

Final notes

Character restrictions are still a problem. At some point we might want to incorporate replacement procedures, or APIs that return Result to flag for non-portable characters.

Using a well-defined path separator gets us pretty far regardless.

Thank you for reading this. And please give me feedback on relative-path if you have the time.

Comments on Reddit.

Patching ThreadPoolExecutor to handle Errors

In this post I’ll describe an important patch that you always want to use when using a ThreadPoolExecutor (or any ExecutorService) in Java.

Edit (2017-11-05): Since JDK 8u92, there is a new option called -XX:+ExitOnOutOfMemoryError that can effectively be used instead.

The patch intends to mitigate the unexpected death of threads, and the impact it has on your application.

To help illustrate this, here is an example project with a very nasty thread eating up all memory:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TransferQueue;

public class Example {
    private static final int MESSAGE_SIZE = 1024 * 1000;

    public static void main(String[] argv) throws Exception {
        final ExecutorService executor =
            new ThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        final TransferQueue<long[]> queue = new LinkedTransferQueue<>();

        executor.submit(new BadThread());

        executor.submit(() -> {
            while (true) {
                queue.transfer(new long[MESSAGE_SIZE]);
            }
        });

        while (true) {
            System.out.println("main: waiting for message...");
            queue.take();
            System.out.println("main: OK");
            Thread.sleep(500);
        }
    }

    /**
     * A bad thread eating up all available memory and holding on to it.
     */
    static class BadThread implements Callable<Void> {
        @Override
        public Void call() throws Exception {
            Thread.sleep(1000);

            System.out.println("BadThread: Start 'borrowing' memory...");

            final List<Long> list = new ArrayList<>();

            while (true) {
                try {
                    list.add(0L);
                } catch (final OutOfMemoryError error) {
                    System.out.println("BadThread: Hold on to OOM: " + error);
                    Thread.sleep(10000);
                }
            }
        }
    }
}

Compile and run this application with -Xmx16m. You should see something like the following:

main: waiting for message...
main: OK
main: waiting for message...
main: OK
BadThread: Start 'borrowing' memory...
main: waiting for message...
main: OK
BadThread: Hold on to OOM: java.lang.OutOfMemoryError: Java heap space
main: waiting for message...
...

The application is stuck; we are no longer seeing any main: OK messages. No stack traces, nothing.

The reason is that our coordinator thread allocates memory for its message. This means that it can be the target of an OutOfMemoryError when the allocation fails, because BadThread has locked up all available memory and is refusing to die.

This is when it gets interesting. ThreadPoolExecutor will, as per its documentation, happily catch and swallow any exception thrown in one of its tasks. It is explicitly left to the developer to handle this.

This leaves us with a dead coordinator thread at the other end of the Queue, and main is left to its own devices forever. :(.

The afterExecute patch

This patch is derived from this StackOverflow answer and can be applied to ThreadPoolExecutor.

final ExecutorService executor = new ThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>()) {
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);

        if (t == null && r instanceof Future<?>) {
            try {
                Future<?> future = (Future<?>) r;

                if (future.isDone()) {
                    future.get();
                }
            } catch (CancellationException ce) {
                t = ce;
            } catch (ExecutionException ee) {
                t = ee.getCause();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt(); // ignore/reset
            }
        }

        if (t != null) {
            if (t instanceof Error) {
                try {
                    System.err.println("Error in runnable: " + r);
                    t.printStackTrace(System.err);
                    System.err.println(
                        "This is an unrecoverable error, shutting down...");
                } finally {
                    System.exit(1);
                }
            }

            System.out.println(t);
        }
    }
};

This patch overrides the afterExecute method, a hook designed to allow custom behavior after the completion of tasks.

Run the project again, and you should see the following:

main: waiting for message...
main: OK
main: waiting for message...
main: OK
BadThread: Start 'borrowing' memory...
main: waiting for message...
main: OK
BadThread: Hold on to OOM: java.lang.OutOfMemoryError: Java heap space
Error in runnable: java.util.concurrent.FutureTask@5cf149bb
java.lang.OutOfMemoryError: Java heap space
    at com.spotify.heroic.ExecutorServicePatch.lambda$main$0(ExecutorServicePatch.java:63)
    at com.spotify.heroic.ExecutorServicePatch$$Lambda$1/495053715.call(Unknown Source)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
This is an unrecoverable error, shutting down...

Process finished with exit code 1

Errors

I want to emphasise that OutOfMemoryError is generally not an error that you can safely recover from. There are no guarantees that the thread responsible for eating up your memory is the target for this error. Even if that is the case, this thread might become important at a later stage in its life. In my opinion, the most reasonable thing to do is to give up.

An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch. Most such errors are abnormal conditions.

At this stage you might be tempted to attempt a clean shutdown of your application on errors. This might work. But we might also be in a state where a thread critical to the clean shutdown of your application is no longer alive. There might not be any memory left to support a complex shutdown. Attempting it could lead to your cleanup attempt crashing, leading us back to where we started.

If you want to cover manually created threads, you can make use of Thread#setDefaultUncaughtExceptionHandler. Just remember, this still does not cover thread pools.
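
As a rough sketch (the handler body and exit policy here are my own, mirroring the afterExecute patch above), installing such a handler could look like this:

// Process-wide handler for exceptions that escape manually created threads.
// Note: tasks submitted to an ExecutorService never reach this handler,
// since the pool catches their exceptions first.
Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
    System.err.println("Uncaught throwable in thread: " + thread.getName());
    throwable.printStackTrace(System.err);

    if (throwable instanceof Error) {
        // Same policy as the afterExecute patch: errors are unrecoverable.
        System.exit(1);
    }
});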

On a final note, if you are a library developer: Please don’t hide your thread pools from us.

Semantic Versioning and Java

This post is about semantic versioning, and how I believe it can be efficiently applied for the benefit of long-term interoperability of Java libraries.

Let us introduce the basic premise of semantic versioning (borrowed from their page), namely version numbers and the connection they have to the continued development of your software.

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

Hello Java

Java has a lot of things which could qualify as members of your public API. The most distinct feature in the language is the interface, a fully abstract class definition that forces you to describe all possible interactions that are allowed with a given implementation.

So let’s build an API using that.

package eu.toolchain.mylib;

/**
 * My Library.
 *
 * @since 1.0
 */
public interface MyLibrary {
    /**
     * Do something.
     */
    public void doSomething();
}

Consider @since: here it doesn’t contain the patch version. It could, but it wouldn’t make a difference. A patch must never modify the API; that privilege is left to the major and the minor version.

Maven plays an important role here as well. The Java ecosystem relies on it to distribute libraries and resolve dependencies. The way you would expose your library is by putting the above in an API artifact named eu.toolchain.mylib:mylib-api. You might also feel compelled to provide an implementation; this could be eu.toolchain.mylib:mylib-core.

The separation is not critical, but it helps in being explicit in what your public API is. Both for you and your users.

My intent is to have your users primarily interact with your library through interfaces, abstract classes, and value objects.

A Minor Change

Let us introduce a minor change to the library.

package eu.toolchain.mylib;

public interface MyLibrary {
    /* .. */

    /**
     * Do something else.
     *
     * @since 1.1
     */
    public void doSomethingElse();
}

In library terms, we are exposing another symbol. For Java, this is just another method with a given signature added to the already existing MyLibrary interface.

This only constitutes a minor change because consumers of the API which happen to use 1.0 will happily continue to operate in a runtime containing 1.1. Anything linked against 1.0 will be oblivious to the fact that there is added functionality in 1.1. This is due to the indirection introduced by Java: method calls use a very flexible symbolic reference to indicate the target of the invocation.

Removing a method and not fixing all callers of it would eventually cause NoSuchMethodError. Eventually, because it would not be triggered until a caller attempts the invocation at runtime. Ouch.
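
To make the runtime nature of this failure concrete, here is a sketch (the Client class is mine, not part of the library) of a caller compiled against 1.1 that ends up running against a 1.0 jar:

package eu.toolchain.myapp;

import eu.toolchain.mylib.MyLibrary;

public class Client {
    public static void run(final MyLibrary library) {
        // Fine: this symbol has existed since 1.0.
        library.doSomething();

        // Compiles against 1.1, but throws NoSuchMethodError at runtime
        // if only mylib 1.0 is on the classpath.
        library.doSomethingElse();
    }
}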

What qualifies as a minor change

Identifying what qualifies as a minor change, and what does not, is one of the harder aspects we need to deal with. It requires a bit of knowledge in how binary compatibility works.

The Eclipse project has compiled an excellent page on this topic which touches a few more cases. For all the gritty details, you should consult Chapter 13 of the Java Language Specification.

I’ll touch on a few things that are compatible, and why.

Increasing visibility

Increasing the visibility of a method is a minor change.

Visibility goes with the following modifiers, from least to most visible:

  • private
  • package protected (no modifier)
  • protected
  • public

From the perspective of the user, a thing is not part of your public API if it is not visible.
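
As a small sketch (the class and method here are made up for illustration), promoting a member in a minor release is safe because nothing that could previously be linked against disappears:

package eu.toolchain.mylib;

public class MyHelper {
    /**
     * Was protected in 1.0, promoted to public in 1.1. Code compiled
     * against 1.0 continues to link and run; it simply could not see
     * this method before.
     *
     * @since 1.1
     */
    public void doInternalThing() {
    }
}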

Adding a method

This works because method invocations only consult the signature of the method being called; the actual lookup is handled indirectly by the virtual machine, which resolves the method at runtime.

So this is good unless the client implements the given API.

package eu.toolchain.mylib;

/**
 * ... boring documentation ...
 *
 * <em>avoid using directly</em>, for compatibility extend one of the provided
 * base classes instead.
 *
 * @see AbstractMyCallback
 */
public interface MyCallback {
    /**
     * @since 1.0
     */
    public boolean checkSomething();

    /**
     * Oops, sorry client :(
     *
     * @since 1.1
     */
    public boolean checkSomethingElse();
}

If you are exposing an API that the client should implement, a very popular compromise is to provide an abstract class that the client must use as a base to maintain compatibility.

/**
 * A base implementation of {@link MyCallback} that will maintain compatibility
 * for you.
 */
public abstract class AbstractMyCallback implements MyCallback {
    /**
     * Should be implemented by client, but if they are using a newer version
     * of the library this will maintain the behavior.
     */
    @Override
    public boolean checkSomethingElse() {
        return false;
    }
}

You as a library maintainer must maintain this class to make sure that between each minor release it does not force clients to implement methods they previously were not required to.

To see this in action, check out SimpleTypeVisitor8 which is part of the interesting java.lang.model API.
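
As a brief sketch of what that looks like from the client side (the visitor below is mine), extending the provided base class means new visit methods added in later releases get a sensible default for free:

import javax.lang.model.type.DeclaredType;
import javax.lang.model.util.SimpleTypeVisitor8;

public class NameVisitor extends SimpleTypeVisitor8<String, Void> {
    // Only override what we care about; everything else falls back to the
    // defaults maintained by the base class across releases.
    @Override
    public String visitDeclared(final DeclaredType type, final Void parameter) {
        return type.asElement().getSimpleName().toString();
    }
}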

Extending behaviour

This one is tricky, but probably the most important to understand.

If you have a documented behavior in your API, you are not allowed to remove or modify it.

In practice, it means that once your javadoc asserts something, that assertion must be versioned as well.

package eu.toolchain.mylib;

/**
 * @since 1.0
 */
public interface MyLibrary {
    /**
     * Create a new black hole that will slowly consume the current Galaxy.
     */
    public void createBlackHole();
}

You may extend it in a manner which does not violate the existing assertions.

package eu.toolchain.mylib;

/**
 * @since 1.0
 */
public interface MyLibrary {
    /**
     * Create a new black hole that will slowly consume the current Galaxy.
     *
     * The initial mass of the black hole will be 10^31 kg.
     */
    public void createBlackHole();
}

You may not, however, change the behavior from the current Galaxy to the Milky Way.

package eu.toolchain.mylib;

/**
 * @since 1.0
 */
public interface MyLibrary {
    /**
     * Create a new black hole that will slowly consume the Milky Way.
     */
    public void createBlackHole();
}

Your users will have operated under the assumption that the current galaxy will be consumed.

Imagine their surprise when they run the newly upgraded application in the Andromeda Galaxy and they inadvertently expedite their own extinction because they didn’t expect a breaking change in behavior for a minor version :/.

A Major Change

Ok, so it’s time to rethink your library’s existence. The world has changed, you’ve grown and realized the errors of your ways. It’s time to fix all the design errors you made in the previous version.

package eu.toolchain.mylib2;

/**
 * My Library, Reloaded.
 * @since 2.0
 */
public interface MyLibrary {
    /**
     * Do something, _correctly_ this time around.
     * @since 2.0
     */
    public void doSomething();
}

In order to introduce a new major version, it is important to consider the following:

  • Do I need to publish a new package?
  • Do I need to publish a new Maven artifact?
  • Should I introduce the changes using @Deprecated?

This sounds rough, but there are a few points to all this.

Publishing a new package

To maintain binary compatibility with the previous Major version.

There are no easy take-backs once an API has been published. You may communicate to your clients that something is deprecated, and it is time to upgrade. You cannot force an atomic upgrade.

If you introduce a major change that cannot co-exist with the previous one in a single classpath, your users are in for a world of pain.
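
A brief sketch of why the new package matters (the Migration class is mine): because the two major versions live in different packages, they can coexist on the same classpath and even be bridged during a migration:

package eu.toolchain.myapp;

import eu.toolchain.mylib.MyLibrary;

public class Migration {
    // The 2.x type is referenced by its fully qualified name, since the
    // simple names collide. Both versions resolve fine at runtime because
    // they live in distinct packages.
    public static void bridge(final MyLibrary old,
                              final eu.toolchain.mylib2.MyLibrary shiny) {
        old.doSomething();
        shiny.doSomething();
    }
}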

Publishing a new Maven artifact

To allow your users to co-depend on the various major versions of your library. Maven will only allow one version of a <groupId>:<artifactId> combination to exist within a given build solution.

For our example, we could go from eu.toolchain.mylib:mylib-api to eu.toolchain.mylib:mylib2-api.

If you don’t change the artifact, Maven will not allow a user to install all your major versions. More importantly, any transitive dependencies requiring another major version will find themselves lacking.

Using @Deprecated to your advantage

@Deprecated is a standard annotation discouraging the use of the element that is annotated.

This has wide support among IDEs, and will typically show up as a warning when used.

You can use this to your advantage when releasing a new Major version.

Assume that you are renaming the following #badName() method.

package eu.toolchain.mylib;

/**
 * @since 1.0
 */
public interface MyLibrary {
    /**
     * A poorly named method.
     */
    public void badName();
}

Into #goodName().

package eu.toolchain.mylib2;

/**
 * @since 2.0
 */
public interface MyLibrary {
    /**
     * A well-named method.
     */
    public void goodName();
}

You can go back and release a new minor version of your 1.x branch containing the newly named method, with a @Deprecated annotation on the old one.

package eu.toolchain.mylib;

/**
 * @since 1.0
 */
public interface MyLibrary {
    /**
     * A poorly named method.
     *
     * @deprecated Will be removed in 2.0 since the name is obviously inferior.
     *             Use {@link #goodName()} instead.
     */
    @Deprecated
    public void badName();

    /**
     * A well-named method.
     *
     * @since 1.1
     */
    public void goodName();
}

This is an excellent way of communicating what changes your users can expect, and can be applied to many situations.

Case studies

Project Jigsaw

Project Jigsaw is an initiative that could improve things in the near future by implementing a module system where dependencies and versions are more explicit.

The specification will not require implementations to support multiple versions of the same module, but it should be possible to hook into the module discovery process in a manner that supports it.

Final Words

Dependency hell is far from solved, but good practices can get us a long way.

Good luck, library maintainer. And may the releases be ever in your favor.