The Cluster class

Using DC/OS E2E usually involves creating one or more Clusters. A cluster is created using a “backend”, such as Docker or a cloud provider. It is also possible to point DC/OS E2E to existing nodes. A Cluster object is then used to interact with the DC/OS cluster.

class dcos_e2e.cluster.Cluster(cluster_backend, masters=1, agents=1, public_agents=1)

Create a DC/OS cluster.

Parameters:
  • cluster_backend – The backend to use for the cluster.
  • masters – The number of master nodes to create.
  • agents – The number of agent nodes to create.
  • public_agents – The number of public agent nodes to create.
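
For example, a minimal sketch of creating a cluster with the Docker backend (the import path for Docker is an assumption; see Backends below):

from dcos_e2e.backends import Docker  # assumed import path for the Docker backend
from dcos_e2e.cluster import Cluster

# Create a cluster with three masters, two agents and one public agent.
cluster = Cluster(
    cluster_backend=Docker(),
    masters=3,
    agents=2,
    public_agents=1,
)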

Choosing a Backend

See Backends for a backend to use for cluster_backend.

Creating a Cluster from Existing Nodes

It is possible to create a Cluster from existing nodes. Clusters created with this method cannot be destroyed by DC/OS E2E. It is assumed that DC/OS is already up and running on the given Nodes; installing DC/OS on them is not supported.

classmethod Cluster.from_nodes(masters, agents, public_agents)

Create a cluster from existing nodes.

Parameters:
  • masters – The master nodes in an existing cluster.
  • agents – The agent nodes in an existing cluster.
  • public_agents – The public agent nodes in an existing cluster.
Return type: Cluster
Returns: A cluster object with the nodes of an existing cluster.
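
For example, an illustrative sketch that wraps one existing master node, assuming a dcos_e2e.node.Node class constructed from IP addresses and SSH details (the Node constructor and import path here are assumptions; addresses, username and key path are placeholders):

from ipaddress import IPv4Address
from pathlib import Path

from dcos_e2e.cluster import Cluster
from dcos_e2e.node import Node  # assumed import path for Node

# Describe an existing master node; all values are placeholders.
master = Node(
    public_ip_address=IPv4Address('203.0.113.10'),
    private_ip_address=IPv4Address('10.0.0.10'),
    default_user='centos',
    ssh_key_path=Path('/path/to/id_rsa'),
)

cluster = Cluster.from_nodes(
    masters={master},
    agents=set(),
    public_agents=set(),
)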

Installing DC/OS

Some backends support installing DC/OS from a path to a build artifact, and some support installing DC/OS from a URL pointing to a build artifact. See how to use DC/OS Enterprise with DC/OS E2E.

Cluster.install_dcos_from_path(build_artifact, dcos_config, ip_detect_path, files_to_copy_to_genconf_dir=(), output=<Output.CAPTURE: 2>)

Install DC/OS from a build artifact stored at a local path.

Parameters:
  • build_artifact – The Path to a build artifact to install DC/OS from.
  • dcos_config – The DC/OS configuration to use.
  • ip_detect_path – The path to an ip-detect script that will be used when installing DC/OS.
  • files_to_copy_to_genconf_dir – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
  • output – What happens with stdout and stderr.
Raises: NotImplementedError – Raised when it is more efficient for the given backend to use the DC/OS advanced installation method that takes build artifacts by URL string.
Return type: None
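
A hedged sketch of a typical call, assuming dcos_config is a dictionary of config.yaml options and that a suitable ip-detect script is available (all paths and option values are placeholders):

from pathlib import Path

cluster.install_dcos_from_path(
    build_artifact=Path('/tmp/dcos_generate_config.sh'),  # placeholder artifact path
    dcos_config={
        'cluster_name': 'example-cluster',  # placeholder config.yaml option
    },
    ip_detect_path=Path('/path/to/ip-detect'),  # placeholder ip-detect script
)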

Cluster.install_dcos_from_url(build_artifact, dcos_config, ip_detect_path, output=<Output.CAPTURE: 2>, files_to_copy_to_genconf_dir=())

Installs DC/OS using the DC/OS advanced installation method.

If supported by the cluster backend, this method spins up a persistent bootstrap host that supplies all dedicated DC/OS hosts with the necessary installation files.

Since the bootstrap host is different from the host initiating the cluster creation, passing the build_artifact as a URL string saves the time of copying the build_artifact to the bootstrap host.

However, some backends may not support using a bootstrap node. For these backends, each node will download and extract the build artifact. This may be very slow, as the build artifact is downloaded to and extracted on each node, one at a time.

Parameters:
  • build_artifact – The URL string to a build artifact to install DC/OS from.
  • dcos_config – The contents of the DC/OS config.yaml.
  • ip_detect_path – The path to an ip-detect script that will be used when installing DC/OS.
  • files_to_copy_to_genconf_dir – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
  • output – What happens with stdout and stderr.
Return type: None
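
A similar sketch for the URL variant, with a placeholder artifact URL and placeholder paths:

from pathlib import Path

cluster.install_dcos_from_url(
    build_artifact='https://example.com/dcos_generate_config.sh',  # placeholder URL
    dcos_config={
        'cluster_name': 'example-cluster',  # placeholder config.yaml option
    },
    ip_detect_path=Path('/path/to/ip-detect'),  # placeholder ip-detect script
)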

Destroying a Cluster

Clusters have a destroy() method. This can be called manually, or Clusters can be used as context managers, in which case the cluster is destroyed when the context manager exits.

from dcos_e2e.backends import Docker  # assumed import path for the Docker backend
from dcos_e2e.cluster import Cluster

with Cluster(cluster_backend=Docker(), masters=3, agents=2):
    pass

Cluster.destroy()

Destroy all nodes in the cluster.

Return type: None
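
When not using a context manager, a minimal sketch of manual cleanup (assuming the Docker backend import path used above):

from dcos_e2e.backends import Docker  # assumed import path for the Docker backend
from dcos_e2e.cluster import Cluster

cluster = Cluster(cluster_backend=Docker())
try:
    pass  # interact with the cluster here
finally:
    # Destroy all nodes even if the work above raises.
    cluster.destroy()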

Waiting for DC/OS

Depending on the hardware and the backend, DC/OS can take some time to install. The methods to wait for DC/OS repeatedly poll the cluster until services are up. Choose wait_for_dcos_oss() or wait_for_dcos_ee() as appropriate.

Cluster.wait_for_dcos_oss(http_checks=True)

Wait until the DC/OS OSS boot process has completed.

Parameters:
  • http_checks – Whether or not to wait for checks which involve HTTP. If this is False, this function may return before DC/OS is fully ready. This is useful in cases where an HTTP connection cannot be made to the cluster. For example, this is useful on macOS without a VPN set up.
Raises: dcos_e2e.exceptions.DCOSTimeoutError – Raised if cluster components did not become ready within one hour.
Return type: None

Cluster.wait_for_dcos_ee(superuser_username, superuser_password, http_checks=True)

Wait until the DC/OS Enterprise boot process has completed.

Parameters:
  • superuser_username – Username of the default superuser.
  • superuser_password – Password of the default superuser.
  • http_checks – Whether or not to wait for checks which involve HTTP. If this is False, this function may return before DC/OS is fully ready. This is useful in cases where an HTTP connection cannot be made to the cluster. For example, this is useful on macOS without a VPN set up.
Raises: dcos_e2e.exceptions.DCOSTimeoutError – Raised if cluster components did not become ready within one hour.
Return type: None
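
For example, a sketch of waiting after an installation, with placeholder superuser credentials that must match the installed dcos_config:

# On an open source DC/OS cluster:
cluster.wait_for_dcos_oss()

# On a DC/OS Enterprise cluster:
cluster.wait_for_dcos_ee(
    superuser_username='admin',           # placeholder credential
    superuser_password='admin-password',  # placeholder credential
)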

Running Integration Tests

It is possible to easily run DC/OS integration tests on a cluster. See how to run tests on DC/OS Enterprise.

from dcos_e2e.backends import Docker  # assumed import path for the Docker backend
from dcos_e2e.cluster import Cluster

with Cluster(cluster_backend=Docker()) as cluster:
    cluster.run_integration_tests(pytest_command=['pytest', '-k', 'mesos'])

Cluster.run_integration_tests(pytest_command, env=None, output=<Output.CAPTURE: 2>, tty=False, test_host=None, transport=None)

Run integration tests on a random master node.

Parameters:
  • pytest_command – The pytest command to run on the node.
  • env – Environment variables to be set on the node before running the pytest_command. On DC/OS Enterprise clusters, DCOS_LOGIN_UNAME and DCOS_LOGIN_PW must be set.
  • output – What happens with stdout and stderr.
  • test_host – The node to run the given command on. If not given, an arbitrary master node is used.
  • tty – If True, allocate a pseudo-tty. This means that the user's terminal is attached to the streams of the process, and that the values of stdout and stderr will not be in the returned subprocess.CompletedProcess.
  • transport – The transport to use for communicating with nodes. If None, the Node’s default_transport is used.
Return type: CompletedProcess
Returns: The result of the pytest command.
Raises: subprocess.CalledProcessError – If the pytest command fails.
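
On DC/OS Enterprise clusters, the login variables described under env might be passed like this (credentials and the test selection are placeholders):

result = cluster.run_integration_tests(
    pytest_command=['pytest', '-x', 'test_tls.py'],  # placeholder test selection
    env={
        'DCOS_LOGIN_UNAME': 'admin',           # placeholder superuser username
        'DCOS_LOGIN_PW': 'admin-password',     # placeholder superuser password
    },
)
# `result` is a subprocess.CompletedProcess holding the pytest outcome.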