The Cluster class

Using DC/OS E2E usually involves creating one or more Clusters. A cluster is created using a “backend”, such as Docker or a cloud provider. It is also possible to point DC/OS E2E to existing nodes. A Cluster object is then used to interact with the DC/OS cluster.

class dcos_e2e.cluster.Cluster(cluster_backend, masters=1, agents=1, public_agents=1, files_to_copy_to_installer=())

Create a DC/OS cluster.

Parameters:
  • cluster_backend (ClusterBackend) – The backend to use for the cluster.
  • masters (int) – The number of master nodes to create.
  • agents (int) – The number of agent nodes to create.
  • public_agents (int) – The number of public agent nodes to create.
  • files_to_copy_to_installer (Iterable[Tuple[Path, Path]]) – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
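A minimal sketch, assuming the Docker backend (see Choosing a Backend below); the ip-detect file pair is an illustrative placeholder:

from pathlib import Path

from dcos_e2e.backends import Docker
from dcos_e2e.cluster import Cluster

# Create a cluster with one node of each role. The file pair is a
# hypothetical example of a file to copy to the installer node.
cluster = Cluster(
    cluster_backend=Docker(),
    masters=1,
    agents=1,
    public_agents=1,
    files_to_copy_to_installer=[
        (Path('./ip-detect'), Path('/genconf/ip-detect')),
    ],
)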

Choosing a Backend

See Backends for a backend to use for cluster_backend.

Creating a Cluster from Existing Nodes

It is possible to create a Cluster from existing nodes. Clusters created with this method cannot be destroyed by DC/OS E2E. DC/OS is assumed to be already up and running on the given Nodes; installing DC/OS on them is not supported.

classmethod Cluster.from_nodes(masters, agents, public_agents)

Create a cluster from existing nodes.

Parameters:
  • masters (Set[Node]) – The master nodes in an existing cluster.
  • agents (Set[Node]) – The agent nodes in an existing cluster.
  • public_agents (Set[Node]) – The public agent nodes in an existing cluster.
Return type: Cluster

Returns: A cluster object with the nodes of an existing cluster.
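A sketch of wiring up existing nodes, assuming masters, agents and public_agents are sets of dcos_e2e.node.Node objects describing a cluster that is already running DC/OS (constructing Node objects is covered in the Node documentation):

from dcos_e2e.cluster import Cluster

# masters, agents and public_agents are assumed to be pre-built
# Set[Node] values for an already-running DC/OS cluster.
cluster = Cluster.from_nodes(
    masters=masters,
    agents=agents,
    public_agents=public_agents,
)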

Installing DC/OS

Some backends support installing DC/OS from a path to a build artifact, while others support installing it from a URL pointing to a build artifact.

Each backend comes with a default DC/OS configuration which is enough to start an open source DC/OS cluster. The dcos_config parameter allows you to add to or override these configuration settings. See how to use DC/OS Enterprise with DC/OS E2E.

Cluster.install_dcos_from_path(build_artifact, dcos_config, log_output_live=False)

Installs DC/OS from a build artifact stored at a local path, if supported by the backend.

Parameters:
  • build_artifact (Path) – The Path to a build artifact to install DC/OS from.
  • dcos_config (Dict[str, Any]) – The DC/OS configuration to use.
  • log_output_live (bool) – If True, log output of the installation live; stderr is then merged into stdout in the return value.
Raises: NotImplementedError – Raised if it is more efficient for the given backend to use the DC/OS advanced installation method, which takes a build artifact by URL string.

Return type: None
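A hedged example, assuming a build artifact has already been downloaded to /tmp/dcos_generate_config.sh; the path and the configuration value are placeholders:

from pathlib import Path

# Install DC/OS from a local artifact, adding one configuration setting.
cluster.install_dcos_from_path(
    build_artifact=Path('/tmp/dcos_generate_config.sh'),
    dcos_config={
        'resolvers': ['8.8.8.8'],  # Illustrative configuration value.
    },
    log_output_live=True,
)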

Cluster.install_dcos_from_url(build_artifact, dcos_config, log_output_live=False)

Installs DC/OS using the DC/OS advanced installation method if supported by the backend.

This method spins up a persistent bootstrap host that supplies all dedicated DC/OS hosts with the necessary installation files.

Since the bootstrap host is different from the host initiating the cluster creation, passing the build_artifact as a URL string saves the time of copying the build_artifact to the bootstrap host.

Parameters:
  • build_artifact (str) – The URL string to a build artifact to install DC/OS from.
  • dcos_config (Dict[str, Any]) – The DC/OS configuration to use.
  • log_output_live (bool) – If True, log output of the installation live; stderr is then merged into stdout in the return value.
Raises: NotImplementedError – Raised if the given backend provides a more efficient installation method than the DC/OS advanced installation method.

Return type: None
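For example, installing from a public artifact URL (the URL and configuration shown are illustrative; any reachable build artifact URL works):

# The bootstrap host downloads the artifact itself, so nothing is
# copied from the local host.
cluster.install_dcos_from_url(
    build_artifact='https://downloads.dcos.io/dcos/stable/dcos_generate_config.sh',
    dcos_config={},  # Extra configuration; empty here for illustration.
)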

Destroying a Cluster

Clusters have a destroy() method. This can be called manually, or Clusters can be used as context managers; in that case the cluster is destroyed when exiting the context manager.

from dcos_e2e.backends import Docker
from dcos_e2e.cluster import Cluster

with Cluster(cluster_backend=Docker(), masters=3, agents=2):
    pass

Cluster.destroy()

Destroy all nodes in the cluster.

Return type: None
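Equivalently, a manual sketch (assuming the same imports as the example above):

cluster = Cluster(cluster_backend=Docker(), masters=3, agents=2)
try:
    pass  # Interact with the cluster here.
finally:
    cluster.destroy()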

Waiting for DC/OS

Depending on the hardware and the backend, DC/OS can take some time to install. The methods to wait for DC/OS repeatedly poll the cluster until services are up. Choose wait_for_dcos_oss() or wait_for_dcos_ee() as appropriate.

Cluster.wait_for_dcos_oss()

Wait until the DC/OS OSS boot process has completed.

Raises: RetryError – Raised if any cluster component did not become healthy in time.
Return type: None

Cluster.wait_for_dcos_ee(superuser_username, superuser_password)

Wait until the DC/OS Enterprise boot process has completed.

Parameters:
  • superuser_username (str) – Username of the default superuser.
  • superuser_password (str) – Password of the default superuser.
Raises: RetryError – Raised if any cluster component did not become healthy in time.

Return type: None
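For example (the Enterprise credentials shown are placeholders for the cluster's configured superuser):

# Open source DC/OS:
cluster.wait_for_dcos_oss()

# DC/OS Enterprise; the credentials are illustrative placeholders:
cluster.wait_for_dcos_ee(
    superuser_username='admin',
    superuser_password='admin-password',
)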

Running Integration Tests

It is possible to easily run DC/OS integration tests on a cluster. See how to run tests on DC/OS Enterprise.

from dcos_e2e.backends import Docker
from dcos_e2e.cluster import Cluster

with Cluster(cluster_backend=Docker()) as cluster:
    cluster.run_integration_tests(pytest_command=['pytest', '-k', 'mesos'])

Cluster.run_integration_tests(pytest_command, env=None, log_output_live=False, tty=False, test_host=None)

Run integration tests on a random master node.

Parameters:
  • pytest_command (List[str]) – The pytest command to run on the node.
  • env (Optional[Dict[str, Any]]) – Environment variables to be set on the node before running the pytest_command. On DC/OS Enterprise clusters, DCOS_LOGIN_UNAME and DCOS_LOGIN_PW must be set.
  • log_output_live (bool) – If True, log output of the pytest_command live; stderr is then merged into stdout in the return value.
  • test_host (Optional[Node]) – The node to run the given command on. If not given, an arbitrary master node is used.
  • tty (bool) – If True, allocate a pseudo-tty, attaching the user's terminal to the streams of the process. In that case the values of stdout and stderr will not be in the returned subprocess.CompletedProcess.
Return type: CompletedProcess

Returns: The result of the pytest command.

Raises: subprocess.CalledProcessError – If the pytest command fails.
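On a DC/OS Enterprise cluster the login environment variables must be set; a sketch with placeholder credentials and a placeholder test selection:

cluster.run_integration_tests(
    pytest_command=['pytest', '-k', 'test_tls'],
    env={
        'DCOS_LOGIN_UNAME': 'admin',        # Placeholder superuser name.
        'DCOS_LOGIN_PW': 'admin-password',  # Placeholder superuser password.
    },
)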