The Cluster class

Using DC/OS E2E usually involves creating one or more Clusters.
A cluster is created using a "backend", which might be Docker or a cloud provider, for example.
It is also possible to point DC/OS E2E at existing nodes.
A Cluster object is then used to interact with the DC/OS cluster.
class dcos_e2e.cluster.Cluster(cluster_backend, masters=1, agents=1, public_agents=1, files_to_copy_to_installer=())

Create a DC/OS cluster.

Parameters:
- cluster_backend – The backend to use for the cluster.
- masters – The number of master nodes to create.
- agents – The number of agent nodes to create.
- public_agents – The number of public agent nodes to create.
- files_to_copy_to_installer – Pairs of host paths to paths on the installer node. These are files to copy from the host to the installer node before installing DC/OS.
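The files_to_copy_to_installer value is a sequence of (host path, installer path) pairs. A minimal sketch of building one with pathlib (the ip-detect file name is illustrative only, not taken from the library's documentation):

```python
from pathlib import Path

# Each entry pairs a path on the host with the destination path on the
# installer node. The ip-detect file name here is a hypothetical example.
files_to_copy_to_installer = (
    (Path('genconf/ip-detect'), Path('/genconf/ip-detect')),
)

for host_path, installer_path in files_to_copy_to_installer:
    print(f'{host_path} -> {installer_path}')
```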
Choosing a Backend

See Backends for a backend to use as cluster_backend.
Creating a Cluster from Existing Nodes

It is possible to create a Cluster from existing nodes.
Clusters created with this method cannot be destroyed by DC/OS E2E.
It is assumed that DC/OS is already up and running on the given Nodes, and installing DC/OS is not supported.
classmethod Cluster.from_nodes(masters, agents, public_agents)

Create a cluster from existing nodes.

Parameters:
- masters – The existing master Nodes.
- agents – The existing agent Nodes.
- public_agents – The existing public agent Nodes.

Return type: Cluster

Returns: A cluster object with the nodes of an existing cluster.
Installing DC/OS

Some backends support installing DC/OS from a path to a build artifact; others support installing DC/OS from a URL pointing to a build artifact.
Each backend comes with a default DC/OS configuration which is enough to start an open source DC/OS cluster.
The extra_config parameter allows you to add to or override these configuration settings.
See how to use DC/OS Enterprise with DC/OS E2E.
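The effect of extra_config can be sketched as a dictionary merge over a hypothetical base configuration (the key names and values below are illustrative only, not the real backend defaults):

```python
# Hypothetical base configuration shipped by a backend; real defaults
# differ per backend and are not shown in this document.
base_config = {
    'cluster_name': 'DCOS',
    'resolvers': ['8.8.8.8'],
}

extra_config = {
    'cluster_name': 'my-test-cluster',  # override a base setting
    'check_time': 'false',              # add a new setting
}

# extra_config entries add to or override the base configuration.
dcos_config = {**base_config, **extra_config}
```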
Cluster.install_dcos_from_path(build_artifact, extra_config=None, log_output_live=False)

Parameters:
- build_artifact – The Path to a build artifact to install DC/OS from.
- extra_config – Implementations may come with a "base" configuration. This dictionary can contain extra installation configuration variables.
- log_output_live – If True, log output of the installation live. If True, stderr is merged into stdout in the return value.

Raises: NotImplementedError – If it is more efficient for the given backend to use the DC/OS advanced installation method, which takes a build artifact by URL string.

Return type: None
Cluster.install_dcos_from_url(build_artifact, extra_config=None, log_output_live=False)

Installs DC/OS using the DC/OS advanced installation method, if supported by the backend.

This method spins up a persistent bootstrap host that supplies all dedicated DC/OS hosts with the necessary installation files.
Since the bootstrap host is different from the host initiating the cluster creation, passing the build_artifact as a URL string saves the time of copying the build_artifact to the bootstrap host.

Parameters:
- build_artifact – The URL string of a build artifact to install DC/OS from.
- extra_config – Implementations may come with a "base" configuration. This dictionary can contain extra installation configuration variables.
- log_output_live – If True, log output of the installation live. If True, stderr is merged into stdout in the return value.

Raises: NotImplementedError – If the given backend provides a more efficient installation method than the DC/OS advanced installation method.

Return type: None
Destroying a Cluster

Clusters have a destroy() method.
This can be called manually, or Clusters can be used as context managers.
In that case the cluster is destroyed when exiting the context manager.

with Cluster(cluster_backend=Docker(), masters=3, agents=2):
    pass
Cluster.destroy()

Destroy all nodes in the cluster.

Return type: None
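The destroy-on-exit behaviour can be illustrated with a stdlib-only stand-in (FakeCluster is hypothetical, for illustration only, and is not part of DC/OS E2E):

```python
class FakeCluster:
    """Hypothetical stand-in for dcos_e2e.cluster.Cluster, sketching the
    destroy-on-exit context manager pattern described above."""

    def __init__(self):
        self.destroyed = False

    def destroy(self):
        # The real method destroys every node in the cluster.
        self.destroyed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Destruction happens even if the body raised an exception.
        self.destroy()
        return False

with FakeCluster() as cluster:
    pass  # interact with the cluster here

print(cluster.destroyed)  # the cluster was destroyed on exit
```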
Waiting for DC/OS

Depending on the hardware and the backend, DC/OS can take some time to install.
The methods to wait for DC/OS repeatedly poll the cluster until services are up.
Choose wait_for_dcos_oss() or wait_for_dcos_ee() as appropriate.
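The repeated polling can be sketched with a stdlib-only loop (a simplified stand-in: the real methods raise RetryError, while this sketch raises TimeoutError):

```python
import time

def wait_until_healthy(check, timeout, interval=0.1):
    """Poll `check` until it returns True or `timeout` seconds elapse.

    Sketch of the polling behind wait_for_dcos_oss / wait_for_dcos_ee;
    not the library's implementation.
    """
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError('cluster did not become healthy in time')
        time.sleep(interval)
```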
Cluster.wait_for_dcos_oss()

Wait until the DC/OS OSS boot process has completed.

Raises: RetryError – Raised if any cluster component did not become healthy in time.

Return type: None
Cluster.wait_for_dcos_ee(superuser_username, superuser_password)

Wait until the DC/OS Enterprise boot process has completed.

Parameters:
- superuser_username – The superuser username set when DC/OS Enterprise was installed.
- superuser_password – The superuser password set when DC/OS Enterprise was installed.

Raises: RetryError – Raised if any cluster component did not become healthy in time.

Return type: None
Running Integration Tests

It is possible to easily run DC/OS integration tests on a cluster. See how to run tests on DC/OS Enterprise.

with Cluster(cluster_backend=Docker()) as cluster:
    cluster.run_integration_tests(pytest_command=['pytest', '-k', 'mesos'])
Cluster.run_integration_tests(pytest_command, env=None, log_output_live=False, tty=False)

Run integration tests on a random master node.

Parameters:
- pytest_command – The pytest command to run on the node.
- env – Environment variables to be set on the node before running the pytest_command. On Enterprise clusters, DCOS_LOGIN_UNAME and DCOS_LOGIN_PW must be set.
- log_output_live – If True, log output of the pytest_command live. If True, stderr is merged into stdout in the return value.
- tty – If True, allocate a pseudo-tty. This means that the user's terminal is attached to the streams of the process, and that the values of stdout and stderr will not be in the returned subprocess.CompletedProcess.

Return type: subprocess.CompletedProcess

Returns: The result of the pytest command.

Raises: subprocess.CalledProcessError – If the pytest command fails.